Feb 02 13:00:58 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 02 13:00:58 crc restorecon[4680]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 13:00:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 13:00:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 13:00:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 13:00:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:58 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 13:00:58 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 13:00:59 crc restorecon[4680]: 
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 13:00:59 crc restorecon[4680]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 
13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 13:00:59 crc 
restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 
13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 
13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc 
restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 13:00:59 crc restorecon[4680]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 13:00:59 crc restorecon[4680]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 13:00:59 crc restorecon[4680]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 02 13:01:00 crc kubenswrapper[4721]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 02 13:01:00 crc kubenswrapper[4721]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 02 13:01:00 crc kubenswrapper[4721]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 02 13:01:00 crc kubenswrapper[4721]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
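[Editor's note] The restorecon output above ends here; "not reset as customized by admin" means restorecon found a context it treats as an admin customization and left it in place rather than forcing the policy default. To inspect the current label on one of these paths programmatically, the kernel exposes it as the security.selinux extended attribute. The following is a minimal sketch, assuming a Linux host and the golang.org/x/sys/unix package; the path is illustrative, taken from the log above, not a required target.

```go
// selinux_label.go - print a file's SELinux context by reading its
// security.selinux extended attribute. Minimal sketch for a Linux host;
// assumes golang.org/x/sys/unix is available.
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	// Illustrative path from the restorecon output above.
	path := "/var/lib/kubelet/config.json"

	// The kernel stores the SELinux context in the "security.selinux" xattr.
	buf := make([]byte, 256)
	n, err := unix.Getxattr(path, "security.selinux", buf)
	if err != nil {
		fmt.Fprintf(os.Stderr, "getxattr %s: %v\n", path, err)
		os.Exit(1)
	}

	// The stored value is NUL-terminated; trim that before printing.
	label := string(buf[:n])
	if len(label) > 0 && label[len(label)-1] == 0 {
		label = label[:len(label)-1]
	}
	fmt.Printf("%s\t%s\n", path, label) // e.g. system_u:object_r:container_var_lib_t:s0
}
```

Running it against a relabeled path should print the same user:role:type:level context strings that appear throughout the log; restorecon with no -F flag leaves such customized contexts alone, which is exactly what the messages above report.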
Feb 02 13:01:00 crc kubenswrapper[4721]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 02 13:01:00 crc kubenswrapper[4721]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.181222 4721 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187176 4721 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187199 4721 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187205 4721 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187210 4721 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187215 4721 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187220 4721 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187226 4721 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187232 4721 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187238 4721 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187243 4721 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187248 4721 feature_gate.go:330] unrecognized feature gate: Example Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187255 4721 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187266 4721 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187272 4721 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187278 4721 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187283 4721 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187289 4721 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187294 4721 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187299 4721 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187305 4721 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. 
It will be removed in a future release. Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187312 4721 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187318 4721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187323 4721 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187329 4721 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187335 4721 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187340 4721 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187345 4721 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187350 4721 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187357 4721 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187363 4721 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187368 4721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187373 4721 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187378 4721 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187383 4721 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187387 4721 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187393 4721 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187398 4721 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187403 4721 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187412 4721 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187419 4721 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187425 4721 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187431 4721 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187436 4721 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187441 4721 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187446 4721 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187452 4721 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187457 4721 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187462 4721 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187467 4721 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187472 4721 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187477 4721 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187482 4721 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187486 4721 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187491 4721 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187496 4721 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187501 4721 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187505 4721 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187510 4721 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187515 4721 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187520 4721 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187524 4721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187529 4721 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187534 4721 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187540 4721 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187545 4721 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 13:01:00 crc 
kubenswrapper[4721]: W0202 13:01:00.187550 4721 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187554 4721 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187560 4721 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187565 4721 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187572 4721 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.187598 4721 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187697 4721 flags.go:64] FLAG: --address="0.0.0.0" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187708 4721 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187717 4721 flags.go:64] FLAG: --anonymous-auth="true" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187724 4721 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187731 4721 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187737 4721 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187745 4721 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187752 4721 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187758 4721 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187764 4721 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187770 4721 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187776 4721 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187782 4721 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187788 4721 flags.go:64] FLAG: --cgroup-root="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187794 4721 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187800 4721 flags.go:64] FLAG: --client-ca-file="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187806 4721 flags.go:64] FLAG: --cloud-config="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187811 4721 flags.go:64] FLAG: --cloud-provider="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187817 4721 flags.go:64] FLAG: --cluster-dns="[]" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187825 4721 flags.go:64] FLAG: --cluster-domain="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187830 4721 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187836 4721 flags.go:64] FLAG: --config-dir="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187842 4721 flags.go:64] FLAG: 
--container-hints="/etc/cadvisor/container_hints.json" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187848 4721 flags.go:64] FLAG: --container-log-max-files="5" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187856 4721 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187862 4721 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187867 4721 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187873 4721 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187879 4721 flags.go:64] FLAG: --contention-profiling="false" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187885 4721 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187893 4721 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187899 4721 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187905 4721 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187915 4721 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187921 4721 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187927 4721 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187932 4721 flags.go:64] FLAG: --enable-load-reader="false" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187938 4721 flags.go:64] FLAG: --enable-server="true" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187944 4721 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187951 4721 flags.go:64] FLAG: --event-burst="100" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187957 4721 flags.go:64] FLAG: --event-qps="50" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187963 4721 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187969 4721 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187974 4721 flags.go:64] FLAG: --eviction-hard="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187981 4721 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187987 4721 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187992 4721 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.187998 4721 flags.go:64] FLAG: --eviction-soft="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188003 4721 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188009 4721 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188015 4721 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188020 4721 flags.go:64] FLAG: --experimental-mounter-path="" Feb 02 13:01:00 crc 
kubenswrapper[4721]: I0202 13:01:00.188026 4721 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188031 4721 flags.go:64] FLAG: --fail-swap-on="true" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188037 4721 flags.go:64] FLAG: --feature-gates="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188044 4721 flags.go:64] FLAG: --file-check-frequency="20s" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188050 4721 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188055 4721 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188085 4721 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188091 4721 flags.go:64] FLAG: --healthz-port="10248" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188097 4721 flags.go:64] FLAG: --help="false" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188103 4721 flags.go:64] FLAG: --hostname-override="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188109 4721 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188116 4721 flags.go:64] FLAG: --http-check-frequency="20s" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188121 4721 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188127 4721 flags.go:64] FLAG: --image-credential-provider-config="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188133 4721 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188139 4721 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188145 4721 flags.go:64] FLAG: --image-service-endpoint="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188151 4721 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188156 4721 flags.go:64] FLAG: --kube-api-burst="100" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188162 4721 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188168 4721 flags.go:64] FLAG: --kube-api-qps="50" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188173 4721 flags.go:64] FLAG: --kube-reserved="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188179 4721 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188185 4721 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188191 4721 flags.go:64] FLAG: --kubelet-cgroups="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188197 4721 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188203 4721 flags.go:64] FLAG: --lock-file="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188208 4721 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188214 4721 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188219 4721 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188228 4721 flags.go:64] 
FLAG: --log-json-split-stream="false" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188234 4721 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188240 4721 flags.go:64] FLAG: --log-text-split-stream="false" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188246 4721 flags.go:64] FLAG: --logging-format="text" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188252 4721 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188259 4721 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188265 4721 flags.go:64] FLAG: --manifest-url="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188270 4721 flags.go:64] FLAG: --manifest-url-header="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188278 4721 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188283 4721 flags.go:64] FLAG: --max-open-files="1000000" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188291 4721 flags.go:64] FLAG: --max-pods="110" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188297 4721 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188303 4721 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188310 4721 flags.go:64] FLAG: --memory-manager-policy="None" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188316 4721 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188322 4721 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188328 4721 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188334 4721 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188347 4721 flags.go:64] FLAG: --node-status-max-images="50" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188353 4721 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188359 4721 flags.go:64] FLAG: --oom-score-adj="-999" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188365 4721 flags.go:64] FLAG: --pod-cidr="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188371 4721 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188380 4721 flags.go:64] FLAG: --pod-manifest-path="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188385 4721 flags.go:64] FLAG: --pod-max-pids="-1" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188391 4721 flags.go:64] FLAG: --pods-per-core="0" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188397 4721 flags.go:64] FLAG: --port="10250" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188403 4721 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188409 4721 flags.go:64] FLAG: --provider-id="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188414 4721 
flags.go:64] FLAG: --qos-reserved="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188420 4721 flags.go:64] FLAG: --read-only-port="10255" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188426 4721 flags.go:64] FLAG: --register-node="true" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188432 4721 flags.go:64] FLAG: --register-schedulable="true" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188437 4721 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188447 4721 flags.go:64] FLAG: --registry-burst="10" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188452 4721 flags.go:64] FLAG: --registry-qps="5" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188458 4721 flags.go:64] FLAG: --reserved-cpus="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188463 4721 flags.go:64] FLAG: --reserved-memory="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188470 4721 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188476 4721 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188481 4721 flags.go:64] FLAG: --rotate-certificates="false" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188487 4721 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188493 4721 flags.go:64] FLAG: --runonce="false" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188498 4721 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188504 4721 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188510 4721 flags.go:64] FLAG: --seccomp-default="false" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188516 4721 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188522 4721 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188527 4721 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188533 4721 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188539 4721 flags.go:64] FLAG: --storage-driver-password="root" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188544 4721 flags.go:64] FLAG: --storage-driver-secure="false" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188550 4721 flags.go:64] FLAG: --storage-driver-table="stats" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188555 4721 flags.go:64] FLAG: --storage-driver-user="root" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188561 4721 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188566 4721 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188572 4721 flags.go:64] FLAG: --system-cgroups="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188578 4721 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188587 4721 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188592 
4721 flags.go:64] FLAG: --tls-cert-file="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188597 4721 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188604 4721 flags.go:64] FLAG: --tls-min-version="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188610 4721 flags.go:64] FLAG: --tls-private-key-file="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188616 4721 flags.go:64] FLAG: --topology-manager-policy="none" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188621 4721 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188627 4721 flags.go:64] FLAG: --topology-manager-scope="container" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188632 4721 flags.go:64] FLAG: --v="2" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188640 4721 flags.go:64] FLAG: --version="false" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188648 4721 flags.go:64] FLAG: --vmodule="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188654 4721 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.188660 4721 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.188824 4721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.188831 4721 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.188837 4721 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.188843 4721 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.188848 4721 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.188853 4721 feature_gate.go:330] unrecognized feature gate: Example Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.188858 4721 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.188863 4721 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.188868 4721 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.188873 4721 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.188878 4721 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.188882 4721 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.188887 4721 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.188892 4721 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.188897 4721 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.188902 4721 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 13:01:00 crc 
kubenswrapper[4721]: W0202 13:01:00.188907 4721 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.188912 4721 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.188939 4721 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.188944 4721 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.188949 4721 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.188955 4721 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.188960 4721 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.188970 4721 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.188976 4721 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.188980 4721 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.188985 4721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.188990 4721 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.188995 4721 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189001 4721 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189007 4721 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189013 4721 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189018 4721 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189024 4721 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189030 4721 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189035 4721 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189040 4721 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189046 4721 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189053 4721 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189058 4721 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189077 4721 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189083 4721 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189088 4721 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189093 4721 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189098 4721 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189102 4721 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189107 4721 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189112 4721 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189117 4721 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189122 4721 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189127 4721 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189132 4721 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189136 4721 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189141 4721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189146 4721 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189151 4721 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189156 4721 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189160 4721 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189165 4721 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189171 4721 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189176 4721 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189182 4721 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189188 4721 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189194 4721 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189199 4721 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189204 4721 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189209 4721 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189214 4721 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189219 4721 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189225 4721 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.189231 4721 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.189246 4721 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.201313 4721 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.201367 4721 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201538 4721 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201569 4721 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201579 4721 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201588 4721 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201599 4721 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201611 4721 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201619 4721 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201628 4721 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201636 4721 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201645 4721 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 
13:01:00.201653 4721 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201665 4721 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201677 4721 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201686 4721 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201696 4721 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201705 4721 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201715 4721 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201725 4721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201733 4721 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201743 4721 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201752 4721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201764 4721 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201776 4721 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201787 4721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201796 4721 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201806 4721 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201816 4721 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201826 4721 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201835 4721 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201844 4721 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201853 4721 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201862 4721 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201870 4721 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201878 4721 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201910 4721 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201919 4721 
feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201927 4721 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201936 4721 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201946 4721 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201956 4721 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201966 4721 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201974 4721 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201982 4721 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201991 4721 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.201999 4721 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202008 4721 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202016 4721 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202024 4721 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202033 4721 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202041 4721 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202049 4721 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202058 4721 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202104 4721 feature_gate.go:330] unrecognized feature gate: Example Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202116 4721 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202128 4721 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202140 4721 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202152 4721 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202187 4721 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202198 4721 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202210 4721 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202220 4721 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 
13:01:00.202231 4721 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202241 4721 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202254 4721 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202265 4721 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202275 4721 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202286 4721 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202297 4721 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202308 4721 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202320 4721 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202336 4721 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.202354 4721 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202609 4721 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202624 4721 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202634 4721 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202644 4721 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202654 4721 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202663 4721 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202671 4721 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202679 4721 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202691 4721 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
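Each pass over the gate list ends in the same effective set, printed by feature_gate.go:386: names that come from OpenShift's feature-gate list rather than upstream Kubernetes (MachineConfigNodes, NewOLM, InsightsConfig, ...) are logged as unrecognized and ignored by the kubelet's parser, while upstream gates (CloudDualStackNodeIPs, KMSv1, ValidatingAdmissionPolicy, ...) land in the map. For illustration only, gates reach a kubelet through the featureGates stanza of the same config file; this is a hand-written sketch, not how OpenShift manages them:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    featureGates:
      CloudDualStackNodeIPs: true    # GA gate; forcing it on just triggers the warning seen above
      KMSv1: true                    # deprecated gate, still honored
      UserNamespacesSupport: false

The merged result can be read back from a running node with kubectl get --raw "/api/v1/nodes/crc/proxy/configz" (node name assumed from the hostname in these entries).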
Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202704 4721 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202715 4721 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202726 4721 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202735 4721 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202745 4721 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202754 4721 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202763 4721 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202772 4721 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202785 4721 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202795 4721 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202804 4721 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202814 4721 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202824 4721 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202833 4721 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202841 4721 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202850 4721 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202858 4721 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202867 4721 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202875 4721 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202883 4721 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202892 4721 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202900 4721 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202909 4721 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202917 4721 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202925 4721 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202935 
4721 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202944 4721 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202952 4721 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202960 4721 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202969 4721 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202977 4721 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202985 4721 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.202994 4721 feature_gate.go:330] unrecognized feature gate: Example Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.203002 4721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.203010 4721 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.203018 4721 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.203026 4721 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.203038 4721 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.203050 4721 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.203058 4721 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.203103 4721 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.203115 4721 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.203127 4721 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.203136 4721 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.203144 4721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.203152 4721 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.203160 4721 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.203169 4721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.203178 4721 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.203186 4721 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.203194 4721 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer 
Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.203205 4721 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.203216 4721 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.203226 4721 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.203235 4721 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.203245 4721 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.203253 4721 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.203262 4721 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.203271 4721 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.203280 4721 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.203288 4721 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.203299 4721 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.203312 4721 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.203587 4721 server.go:940] "Client rotation is on, will bootstrap in background" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.209866 4721 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.210018 4721 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
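With "Client rotation is on" and a still-valid kubeconfig, the kubelet loads its current client pair from /var/lib/kubelet/pki/kubelet-client-current.pem and schedules rotation. To check the same expiry the certificate_manager entries report, the PEM can be inspected directly on the node (assuming openssl is present; the path is taken verbatim from the entry above):

    # print the subject and notAfter of the kubelet's current client certificate
    openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -subject -enddate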
Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.211938 4721 server.go:997] "Starting client certificate rotation" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.211991 4721 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.212222 4721 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-16 11:31:02.977802238 +0000 UTC Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.212336 4721 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.243678 4721 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 13:01:00 crc kubenswrapper[4721]: E0202 13:01:00.244615 4721 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.247:6443: connect: connection refused" logger="UnhandledError" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.249552 4721 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.262827 4721 log.go:25] "Validated CRI v1 runtime API" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.298089 4721 log.go:25] "Validated CRI v1 image API" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.299730 4721 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.303676 4721 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-02-12-56-52-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.303717 4721 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.324043 4721 manager.go:217] Machine: {Timestamp:2026-02-02 13:01:00.321232183 +0000 UTC m=+0.623746612 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:a18387c5-ff07-4cdd-8a5b-70ab978f8648 BootID:a135069f-e3e0-400d-820a-6a71848b843d Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 
DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:59:f0:b8 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:59:f0:b8 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:ed:04:21 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:1a:d8:fe Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:93:2e:34 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:6d:5b:d9 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:06:78:7e:68:d7:6f Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:0a:d5:bd:3c:20:40 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] 
Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.324333 4721 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.324550 4721 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.327465 4721 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.327830 4721 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.327901 4721 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.328271 4721 topology_manager.go:138] "Creating topology manager with none policy" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.328291 4721 
container_manager_linux.go:303] "Creating device plugin manager" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.328925 4721 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.328971 4721 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.329956 4721 state_mem.go:36] "Initialized new in-memory state store" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.330202 4721 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.334769 4721 kubelet.go:418] "Attempting to sync node with API server" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.335012 4721 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.335148 4721 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.335175 4721 kubelet.go:324] "Adding apiserver pod source" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.335195 4721 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.339778 4721 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.339940 4721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.247:6443: connect: connection refused Feb 02 13:01:00 crc kubenswrapper[4721]: E0202 13:01:00.340062 4721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.247:6443: connect: connection refused" logger="UnhandledError" Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.340129 4721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.247:6443: connect: connection refused Feb 02 13:01:00 crc kubenswrapper[4721]: E0202 13:01:00.340280 4721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.247:6443: connect: connection refused" logger="UnhandledError" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.340711 4721 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
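Two certificate stores are loaded during this boot (`kubelet-client-current.pem` earlier, `kubelet-server-current.pem` just now), and for the client pair the manager logged an expiration of 2026-02-24 05:52:08 against a rotation deadline of 2025-12-16 11:31:02: rotation is scheduled well before expiry, and because that deadline is already in the past at boot time the kubelet starts rotating immediately. A sketch of how such a deadline can be derived, assuming upstream client-go's behavior of jittering rotation into the 70-90% band of the certificate's validity window (the notBefore date below is an assumption; only notAfter appears in the log):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a jittered point inside the certificate's validity
// window. The 70-90% band mirrors upstream client-go's certificate manager
// and is stated as background knowledge, not something this log proves.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	validity := notAfter.Sub(notBefore)
	jitter := 0.7 + 0.2*rand.Float64() // rotate at 70-90% of the lifetime
	return notBefore.Add(time.Duration(float64(validity) * jitter))
}

func main() {
	// notAfter is the expiration logged above; notBefore assumes a one-year cert.
	notBefore := time.Date(2025, time.February, 24, 5, 52, 8, 0, time.UTC)
	notAfter := time.Date(2026, time.February, 24, 5, 52, 8, 0, time.UTC)
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}
```

With a one-year window starting 2025-02-24, the logged deadline of 2025-12-16 sits at roughly 81% of the lifetime, inside that band.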
Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.342547 4721 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.343811 4721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.343831 4721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.343837 4721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.343844 4721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.343855 4721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.343863 4721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.343872 4721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.343884 4721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.343892 4721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.343900 4721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.343909 4721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.343916 4721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.344922 4721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.345352 4721 server.go:1280] "Started kubelet" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.345754 4721 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.247:6443: connect: connection refused Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.346387 4721 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 02 13:01:00 crc systemd[1]: Started Kubernetes Kubelet. 
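Every `connection refused` in this stretch, from the CSR post at 13:01:00.244 to the reflector lists and the CSINode wait just above, has the same cause: on this single-node cluster the kubelet comes up first, and nothing listens on api-int.crc.testing:6443 until the kubelet itself launches the static control-plane pods it watches under /etc/kubernetes/manifests, so each client dials, fails, and retries. A hedged sketch of that dial-until-ready pattern (endpoint, timeout, and the one-second cadence are illustrative assumptions):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer dials the apiserver endpoint until it accepts a TCP
// connection, mirroring the retry pattern behind the "connection refused"
// errors in the log above.
func waitForAPIServer(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // the endpoint is accepting TCP connections
		}
		fmt.Printf("dial %s: %v; retrying\n", addr, err)
		time.Sleep(1 * time.Second)
	}
	return fmt.Errorf("%s not reachable within %s", addr, timeout)
}

func main() {
	// api-int.crc.testing:6443 is the endpoint the kubelet dials in the log.
	if err := waitForAPIServer("api-int.crc.testing:6443", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```

The kubelet's real clients layer proper backoff on top of this fixed sleep; the lease controller below, for instance, starts retrying at a 200ms interval.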
Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.346966 4721 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.347412 4721 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.349251 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.349312 4721 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.349322 4721 server.go:460] "Adding debug handlers to kubelet server" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.349363 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 02:54:03.674628858 +0000 UTC Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.349563 4721 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.349578 4721 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.350091 4721 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 02 13:01:00 crc kubenswrapper[4721]: E0202 13:01:00.350176 4721 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 13:01:00 crc kubenswrapper[4721]: E0202 13:01:00.351898 4721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.247:6443: connect: connection refused" interval="200ms" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.352331 4721 factory.go:55] Registering systemd factory Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.352355 4721 factory.go:221] Registration of the systemd container factory successfully Feb 02 13:01:00 crc kubenswrapper[4721]: E0202 13:01:00.351449 4721 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.247:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18906f829e16b01c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 13:01:00.345323548 +0000 UTC m=+0.647837927,LastTimestamp:2026-02-02 13:01:00.345323548 +0000 UTC m=+0.647837927,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.353628 4721 factory.go:153] Registering CRI-O factory Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.353677 4721 factory.go:221] Registration of the crio container factory successfully Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.353807 4721 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix 
/run/containerd/containerd.sock: connect: no such file or directory Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.353848 4721 factory.go:103] Registering Raw factory Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.353878 4721 manager.go:1196] Started watching for new ooms in manager Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.357965 4721 manager.go:319] Starting recovery of all containers Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.353742 4721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.247:6443: connect: connection refused Feb 02 13:01:00 crc kubenswrapper[4721]: E0202 13:01:00.358736 4721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.247:6443: connect: connection refused" logger="UnhandledError" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.368459 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.368528 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.368546 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.368562 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.368576 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.368620 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.368635 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.368650 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.368665 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.368678 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.368693 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.368706 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.368719 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.368732 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.368743 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.368753 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.368766 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.368781 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.368793 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.368834 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.368876 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.368890 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.368902 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.368912 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.368933 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.368950 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.368974 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.369596 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.369623 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.369638 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.369651 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.369721 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.369738 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.369754 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.369767 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.369781 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.369794 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.369808 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.369819 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.369831 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.369846 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.369858 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.369878 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.369891 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.369904 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.369926 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.369943 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.369956 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.369970 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370022 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370038 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370051 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370089 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370104 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370120 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370135 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370150 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370243 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370258 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370273 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370287 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370300 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370314 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370330 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370345 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370363 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370376 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370391 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370407 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370422 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370437 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370453 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370468 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370483 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370619 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370643 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370659 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370674 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370687 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370700 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370713 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370730 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370742 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370754 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370767 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370779 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370794 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370807 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370820 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.370832 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371025 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371043 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371056 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371094 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371111 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371123 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371138 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371154 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371166 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371178 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371193 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371212 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371231 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371246 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371266 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371280 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371302 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371318 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371331 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371361 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371376 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371390 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371429 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371453 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371466 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371502 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371516 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371530 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371543 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371556 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371623 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371660 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371675 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371690 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371703 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371716 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371728 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371740 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371751 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371765 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371780 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371792 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371804 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371817 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371829 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371842 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371855 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371867 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371879 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371893 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371929 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371944 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371957 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371969 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371982 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.371996 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.372009 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.372024 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.372038 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.372050 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.372082 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.372097 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.372110 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.372122 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.372136 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374086 4721 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374122 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374163 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374177 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374190 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374200 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374210 4721 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374223 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374233 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374242 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374250 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374287 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374299 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374310 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374321 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374351 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374365 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374380 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374392 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374404 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374417 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374430 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374446 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374459 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374471 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374484 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374499 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374514 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374527 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374541 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374554 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374567 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374579 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374592 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374606 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374621 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374636 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374652 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.374666 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.375227 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.375283 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.375311 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.375381 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.375410 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.375424 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.375437 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.375449 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.375464 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.375476 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.375488 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.375502 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.375514 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.375525 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.375536 4721 reconstruct.go:97] "Volume reconstruction finished" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.375544 4721 reconciler.go:26] "Reconciler: start to sync state" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.378761 4721 manager.go:324] Recovery completed Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.395301 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.398086 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.398126 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.398139 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.400143 4721 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.400487 4721 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.400514 4721 state_mem.go:36] "Initialized new in-memory state store" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.406582 4721 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.408304 4721 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.408360 4721 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.408393 4721 kubelet.go:2335] "Starting kubelet main sync loop" Feb 02 13:01:00 crc kubenswrapper[4721]: E0202 13:01:00.408450 4721 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.410176 4721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.247:6443: connect: connection refused Feb 02 13:01:00 crc kubenswrapper[4721]: E0202 13:01:00.410262 4721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.247:6443: connect: connection refused" logger="UnhandledError" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.428746 4721 policy_none.go:49] "None policy: Start" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.430330 4721 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.430379 4721 state_mem.go:35] "Initializing new in-memory state store" Feb 02 13:01:00 crc kubenswrapper[4721]: E0202 13:01:00.451268 4721 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.486100 4721 manager.go:334] "Starting Device Plugin manager" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.486171 4721 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.486186 4721 server.go:79] "Starting device plugin registration server" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.486657 4721 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.486686 4721 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.486831 4721 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.486920 4721 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.486933 4721 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 02 13:01:00 crc kubenswrapper[4721]: E0202 13:01:00.500505 4721 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.508775 4721 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 02 13:01:00 crc kubenswrapper[4721]: 
I0202 13:01:00.508877 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.510339 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.510373 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.510382 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.510534 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.510827 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.510872 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.511404 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.511424 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.511431 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.511507 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.511607 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.511636 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.512134 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.512173 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.512183 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.512196 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.512210 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.512217 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.512308 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.512508 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.512512 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.512539 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.512561 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.512578 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.512851 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.512878 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.512891 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.512992 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.513149 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.513185 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.513914 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.513936 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.513937 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.513944 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.513961 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.514029 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.514190 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.514212 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.514446 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.514474 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.514483 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.514918 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.514948 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.514971 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:00 crc kubenswrapper[4721]: E0202 13:01:00.552564 4721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.247:6443: connect: connection refused" interval="400ms" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.579061 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.579132 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.579158 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.579185 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.579209 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.579232 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.579256 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.579280 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.579307 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.579330 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.579353 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.579374 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.579428 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.579509 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.579614 4721 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.587047 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.588197 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.588240 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.588255 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.588286 4721 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 13:01:00 crc kubenswrapper[4721]: E0202 13:01:00.588869 4721 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.247:6443: connect: connection refused" node="crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.680456 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.680551 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.680589 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.680623 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.680654 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.680684 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.680720 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.680749 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.680785 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.680789 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.680817 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.680851 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.680883 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.680902 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.680916 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.680951 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.680985 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.680988 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.681108 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.681186 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.681237 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.681281 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.681339 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.681382 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.681428 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 
13:01:00.681475 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.681515 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.680954 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.681560 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.681575 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.789760 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.791468 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.791534 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.791547 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.791578 4721 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 13:01:00 crc kubenswrapper[4721]: E0202 13:01:00.792284 4721 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.247:6443: connect: connection refused" node="crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.839720 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.844879 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.874007 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.880651 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: I0202 13:01:00.887091 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.904643 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-802b419d02dfb173a42e0158533bcfcdfb56415cd6e2da8001cdffcdb0e621ae WatchSource:0}: Error finding container 802b419d02dfb173a42e0158533bcfcdfb56415cd6e2da8001cdffcdb0e621ae: Status 404 returned error can't find the container with id 802b419d02dfb173a42e0158533bcfcdfb56415cd6e2da8001cdffcdb0e621ae Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.913382 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-f44f5e6d1d8ff0089a72a7776829f345b01fc5b6830079282b1cd20148941989 WatchSource:0}: Error finding container f44f5e6d1d8ff0089a72a7776829f345b01fc5b6830079282b1cd20148941989: Status 404 returned error can't find the container with id f44f5e6d1d8ff0089a72a7776829f345b01fc5b6830079282b1cd20148941989 Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.919449 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-e6c3d0908ceddba8a08abae2f315012e2e4c8862bd115a2f2dea5beecbbfeba1 WatchSource:0}: Error finding container e6c3d0908ceddba8a08abae2f315012e2e4c8862bd115a2f2dea5beecbbfeba1: Status 404 returned error can't find the container with id e6c3d0908ceddba8a08abae2f315012e2e4c8862bd115a2f2dea5beecbbfeba1 Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.926850 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-74208df8f3f5d9bd44a8d1e2cb6bf44e3116d923706e79704980c5c727a854f3 WatchSource:0}: Error finding container 74208df8f3f5d9bd44a8d1e2cb6bf44e3116d923706e79704980c5c727a854f3: Status 404 returned error can't find the container with id 74208df8f3f5d9bd44a8d1e2cb6bf44e3116d923706e79704980c5c727a854f3 Feb 02 13:01:00 crc kubenswrapper[4721]: W0202 13:01:00.931760 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-b61773857bafdd53f6da64c48b33b41c1c0046210bb4e4c19c214b004b9a92e8 WatchSource:0}: Error finding container b61773857bafdd53f6da64c48b33b41c1c0046210bb4e4c19c214b004b9a92e8: Status 404 returned error can't find the container with id b61773857bafdd53f6da64c48b33b41c1c0046210bb4e4c19c214b004b9a92e8 Feb 02 13:01:00 crc kubenswrapper[4721]: E0202 13:01:00.954519 4721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.247:6443: connect: connection refused" interval="800ms" Feb 02 13:01:01 crc kubenswrapper[4721]: I0202 13:01:01.193114 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:01 crc kubenswrapper[4721]: I0202 13:01:01.194604 4721 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:01 crc kubenswrapper[4721]: I0202 13:01:01.194696 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:01 crc kubenswrapper[4721]: I0202 13:01:01.194727 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:01 crc kubenswrapper[4721]: I0202 13:01:01.194773 4721 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 13:01:01 crc kubenswrapper[4721]: E0202 13:01:01.195567 4721 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.247:6443: connect: connection refused" node="crc" Feb 02 13:01:01 crc kubenswrapper[4721]: W0202 13:01:01.346201 4721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.247:6443: connect: connection refused Feb 02 13:01:01 crc kubenswrapper[4721]: E0202 13:01:01.346308 4721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.247:6443: connect: connection refused" logger="UnhandledError" Feb 02 13:01:01 crc kubenswrapper[4721]: I0202 13:01:01.346763 4721 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.247:6443: connect: connection refused Feb 02 13:01:01 crc kubenswrapper[4721]: I0202 13:01:01.349977 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 02:49:23.531682892 +0000 UTC Feb 02 13:01:01 crc kubenswrapper[4721]: I0202 13:01:01.414799 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"74208df8f3f5d9bd44a8d1e2cb6bf44e3116d923706e79704980c5c727a854f3"} Feb 02 13:01:01 crc kubenswrapper[4721]: I0202 13:01:01.416008 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e6c3d0908ceddba8a08abae2f315012e2e4c8862bd115a2f2dea5beecbbfeba1"} Feb 02 13:01:01 crc kubenswrapper[4721]: I0202 13:01:01.417360 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f44f5e6d1d8ff0089a72a7776829f345b01fc5b6830079282b1cd20148941989"} Feb 02 13:01:01 crc kubenswrapper[4721]: I0202 13:01:01.418892 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"802b419d02dfb173a42e0158533bcfcdfb56415cd6e2da8001cdffcdb0e621ae"} Feb 02 13:01:01 crc kubenswrapper[4721]: I0202 13:01:01.420179 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b61773857bafdd53f6da64c48b33b41c1c0046210bb4e4c19c214b004b9a92e8"} Feb 02 13:01:01 crc kubenswrapper[4721]: W0202 13:01:01.499015 4721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.247:6443: connect: connection refused Feb 02 13:01:01 crc kubenswrapper[4721]: E0202 13:01:01.499110 4721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.247:6443: connect: connection refused" logger="UnhandledError" Feb 02 13:01:01 crc kubenswrapper[4721]: W0202 13:01:01.506194 4721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.247:6443: connect: connection refused Feb 02 13:01:01 crc kubenswrapper[4721]: E0202 13:01:01.506257 4721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.247:6443: connect: connection refused" logger="UnhandledError" Feb 02 13:01:01 crc kubenswrapper[4721]: W0202 13:01:01.544451 4721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.247:6443: connect: connection refused Feb 02 13:01:01 crc kubenswrapper[4721]: E0202 13:01:01.545023 4721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.247:6443: connect: connection refused" logger="UnhandledError" Feb 02 13:01:01 crc kubenswrapper[4721]: E0202 13:01:01.756049 4721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.247:6443: connect: connection refused" interval="1.6s" Feb 02 13:01:01 crc kubenswrapper[4721]: I0202 13:01:01.996057 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:01 crc kubenswrapper[4721]: I0202 13:01:01.998767 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:01 crc kubenswrapper[4721]: I0202 13:01:01.998848 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:01 crc kubenswrapper[4721]: I0202 13:01:01.998871 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:01 crc kubenswrapper[4721]: I0202 13:01:01.998921 4721 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 
13:01:01 crc kubenswrapper[4721]: E0202 13:01:01.999649 4721 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.247:6443: connect: connection refused" node="crc" Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.347863 4721 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.247:6443: connect: connection refused Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.350957 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 23:51:19.910009421 +0000 UTC Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.352271 4721 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 02 13:01:02 crc kubenswrapper[4721]: E0202 13:01:02.353902 4721 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.247:6443: connect: connection refused" logger="UnhandledError" Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.426762 4721 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e" exitCode=0 Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.426880 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e"} Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.426998 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.428820 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.428879 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.428896 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.430438 4721 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="35163308ffda655458ed288261db7ddb2994d3ae649905d5f37163b106bde444" exitCode=0 Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.430514 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"35163308ffda655458ed288261db7ddb2994d3ae649905d5f37163b106bde444"} Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.430565 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.433344 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.433382 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.433398 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.434251 4721 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="8e8c9b4927e7df11ad15f3436346ceef587ef7e5e8716d26bd2e9b2000f9bc16" exitCode=0 Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.434380 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.434377 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"8e8c9b4927e7df11ad15f3436346ceef587ef7e5e8716d26bd2e9b2000f9bc16"} Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.436497 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.436530 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.436541 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.438660 4721 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c" exitCode=0 Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.438791 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c"} Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.438812 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.440185 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.440226 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.440238 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.442635 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.444048 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.444132 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.444159 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.444585 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8"} Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.444637 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a"} Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.444656 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3"} Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.444674 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f"} Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.444736 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.445851 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.445895 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:02 crc kubenswrapper[4721]: I0202 13:01:02.445914 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:03 crc kubenswrapper[4721]: E0202 13:01:03.258809 4721 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.247:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18906f829e16b01c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 13:01:00.345323548 +0000 UTC m=+0.647837927,LastTimestamp:2026-02-02 13:01:00.345323548 +0000 UTC m=+0.647837927,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 13:01:03 crc kubenswrapper[4721]: W0202 13:01:03.270731 4721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.247:6443: connect: connection refused Feb 02 13:01:03 crc kubenswrapper[4721]: E0202 13:01:03.270804 4721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
38.129.56.247:6443: connect: connection refused" logger="UnhandledError" Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.347432 4721 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.247:6443: connect: connection refused Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.351618 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 19:39:19.860802372 +0000 UTC Feb 02 13:01:03 crc kubenswrapper[4721]: E0202 13:01:03.357277 4721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.247:6443: connect: connection refused" interval="3.2s" Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.392449 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:01:03 crc kubenswrapper[4721]: W0202 13:01:03.423379 4721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.247:6443: connect: connection refused Feb 02 13:01:03 crc kubenswrapper[4721]: E0202 13:01:03.423495 4721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.247:6443: connect: connection refused" logger="UnhandledError" Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.452195 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d8797e4a71975106b9aa6c6519ef0d64047df3b21e3164f586b1441f5897e0e2"} Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.452264 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"da1bc82fcd0af4a3c1e3508fe525c95017c03f521388a28e7eaad2bb5dbb0d9f"} Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.452282 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e3b10d453f884ce1095dd9bb0f8d91ebd466c4e2771a3a425abbe3001084cb09"} Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.452414 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.453555 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.453589 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.453602 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:03 
crc kubenswrapper[4721]: I0202 13:01:03.459832 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f"} Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.459891 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06"} Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.459904 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787"} Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.459915 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5"} Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.462154 4721 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6" exitCode=0 Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.462221 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6"} Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.462304 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.463255 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.463289 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.463302 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.466884 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.466917 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.466861 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"70c3e63a8ff69cfcf15df77e463972f3a74bb28949d22cd1ef33a2ae8ff9aaa1"} Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.468094 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.468125 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.468138 4721 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.468885 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.468925 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.468940 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.472385 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 13:01:03 crc kubenswrapper[4721]: W0202 13:01:03.598353 4721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.247:6443: connect: connection refused Feb 02 13:01:03 crc kubenswrapper[4721]: E0202 13:01:03.598457 4721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.247:6443: connect: connection refused" logger="UnhandledError" Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.600769 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.601973 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.602016 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.602029 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:03 crc kubenswrapper[4721]: I0202 13:01:03.602056 4721 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 13:01:03 crc kubenswrapper[4721]: E0202 13:01:03.602563 4721 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.247:6443: connect: connection refused" node="crc" Feb 02 13:01:04 crc kubenswrapper[4721]: I0202 13:01:04.015179 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:01:04 crc kubenswrapper[4721]: I0202 13:01:04.023822 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:01:04 crc kubenswrapper[4721]: I0202 13:01:04.351856 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 11:03:09.87702514 +0000 UTC Feb 02 13:01:04 crc kubenswrapper[4721]: I0202 13:01:04.474229 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d"} Feb 02 13:01:04 crc kubenswrapper[4721]: I0202 13:01:04.474384 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:04 crc kubenswrapper[4721]: I0202 13:01:04.475908 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:04 crc kubenswrapper[4721]: I0202 13:01:04.476265 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:04 crc kubenswrapper[4721]: I0202 13:01:04.476526 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:04 crc kubenswrapper[4721]: I0202 13:01:04.477788 4721 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5" exitCode=0 Feb 02 13:01:04 crc kubenswrapper[4721]: I0202 13:01:04.477950 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:04 crc kubenswrapper[4721]: I0202 13:01:04.477982 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:04 crc kubenswrapper[4721]: I0202 13:01:04.477983 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5"} Feb 02 13:01:04 crc kubenswrapper[4721]: I0202 13:01:04.478118 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:04 crc kubenswrapper[4721]: I0202 13:01:04.478203 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:04 crc kubenswrapper[4721]: I0202 13:01:04.479606 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:04 crc kubenswrapper[4721]: I0202 13:01:04.479669 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:04 crc kubenswrapper[4721]: I0202 13:01:04.479683 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:04 crc kubenswrapper[4721]: I0202 13:01:04.479749 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:04 crc kubenswrapper[4721]: I0202 13:01:04.479780 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:04 crc kubenswrapper[4721]: I0202 13:01:04.479797 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:04 crc kubenswrapper[4721]: I0202 13:01:04.479793 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:04 crc kubenswrapper[4721]: I0202 13:01:04.479871 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:04 crc kubenswrapper[4721]: I0202 13:01:04.479898 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 02 13:01:04 crc kubenswrapper[4721]: I0202 13:01:04.479947 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:04 crc kubenswrapper[4721]: I0202 13:01:04.479990 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:04 crc kubenswrapper[4721]: I0202 13:01:04.480014 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:05 crc kubenswrapper[4721]: I0202 13:01:05.352234 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 17:15:04.502403127 +0000 UTC Feb 02 13:01:05 crc kubenswrapper[4721]: I0202 13:01:05.487096 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e"} Feb 02 13:01:05 crc kubenswrapper[4721]: I0202 13:01:05.487166 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8"} Feb 02 13:01:05 crc kubenswrapper[4721]: I0202 13:01:05.487192 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd"} Feb 02 13:01:05 crc kubenswrapper[4721]: I0202 13:01:05.487212 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90"} Feb 02 13:01:05 crc kubenswrapper[4721]: I0202 13:01:05.487218 4721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 13:01:05 crc kubenswrapper[4721]: I0202 13:01:05.487264 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:05 crc kubenswrapper[4721]: I0202 13:01:05.487281 4721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 13:01:05 crc kubenswrapper[4721]: I0202 13:01:05.487353 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:05 crc kubenswrapper[4721]: I0202 13:01:05.487291 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:05 crc kubenswrapper[4721]: I0202 13:01:05.488624 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:05 crc kubenswrapper[4721]: I0202 13:01:05.488664 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:05 crc kubenswrapper[4721]: I0202 13:01:05.488677 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:05 crc kubenswrapper[4721]: I0202 13:01:05.489657 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:05 crc kubenswrapper[4721]: I0202 13:01:05.489982 4721 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:05 crc kubenswrapper[4721]: I0202 13:01:05.490021 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:05 crc kubenswrapper[4721]: I0202 13:01:05.490297 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:05 crc kubenswrapper[4721]: I0202 13:01:05.490497 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:05 crc kubenswrapper[4721]: I0202 13:01:05.490698 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:06 crc kubenswrapper[4721]: I0202 13:01:06.009504 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:01:06 crc kubenswrapper[4721]: I0202 13:01:06.150367 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:01:06 crc kubenswrapper[4721]: I0202 13:01:06.170916 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:01:06 crc kubenswrapper[4721]: I0202 13:01:06.352875 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 11:00:17.131440077 +0000 UTC Feb 02 13:01:06 crc kubenswrapper[4721]: I0202 13:01:06.390155 4721 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 02 13:01:06 crc kubenswrapper[4721]: I0202 13:01:06.496840 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"99a0afebb867064e3383aa667730cb4ee786878a48cea87c6537dc4da6e782aa"} Feb 02 13:01:06 crc kubenswrapper[4721]: I0202 13:01:06.496944 4721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 13:01:06 crc kubenswrapper[4721]: I0202 13:01:06.497012 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:06 crc kubenswrapper[4721]: I0202 13:01:06.497033 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:06 crc kubenswrapper[4721]: I0202 13:01:06.497145 4721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 13:01:06 crc kubenswrapper[4721]: I0202 13:01:06.497250 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:06 crc kubenswrapper[4721]: I0202 13:01:06.498941 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:06 crc kubenswrapper[4721]: I0202 13:01:06.499007 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:06 crc kubenswrapper[4721]: I0202 13:01:06.499042 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:06 crc kubenswrapper[4721]: I0202 13:01:06.499061 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:06 crc kubenswrapper[4721]: I0202 
13:01:06.499009 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:06 crc kubenswrapper[4721]: I0202 13:01:06.499157 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:06 crc kubenswrapper[4721]: I0202 13:01:06.499341 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:06 crc kubenswrapper[4721]: I0202 13:01:06.499394 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:06 crc kubenswrapper[4721]: I0202 13:01:06.499408 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:06 crc kubenswrapper[4721]: I0202 13:01:06.802959 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:06 crc kubenswrapper[4721]: I0202 13:01:06.805207 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:06 crc kubenswrapper[4721]: I0202 13:01:06.805276 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:06 crc kubenswrapper[4721]: I0202 13:01:06.805300 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:06 crc kubenswrapper[4721]: I0202 13:01:06.805344 4721 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 13:01:07 crc kubenswrapper[4721]: I0202 13:01:07.353232 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 06:39:09.572221043 +0000 UTC Feb 02 13:01:07 crc kubenswrapper[4721]: I0202 13:01:07.499873 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:07 crc kubenswrapper[4721]: I0202 13:01:07.499982 4721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 13:01:07 crc kubenswrapper[4721]: I0202 13:01:07.500103 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:07 crc kubenswrapper[4721]: I0202 13:01:07.501109 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:07 crc kubenswrapper[4721]: I0202 13:01:07.501144 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:07 crc kubenswrapper[4721]: I0202 13:01:07.501156 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:07 crc kubenswrapper[4721]: I0202 13:01:07.501729 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:07 crc kubenswrapper[4721]: I0202 13:01:07.501805 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:07 crc kubenswrapper[4721]: I0202 13:01:07.501817 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:08 crc kubenswrapper[4721]: I0202 13:01:08.354530 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 
+0000 UTC, rotation deadline is 2025-12-12 22:52:01.787394152 +0000 UTC Feb 02 13:01:08 crc kubenswrapper[4721]: I0202 13:01:08.397423 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:01:08 crc kubenswrapper[4721]: I0202 13:01:08.397688 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:08 crc kubenswrapper[4721]: I0202 13:01:08.399050 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:08 crc kubenswrapper[4721]: I0202 13:01:08.399115 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:08 crc kubenswrapper[4721]: I0202 13:01:08.399127 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:08 crc kubenswrapper[4721]: I0202 13:01:08.781580 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 02 13:01:08 crc kubenswrapper[4721]: I0202 13:01:08.781865 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:08 crc kubenswrapper[4721]: I0202 13:01:08.783718 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:08 crc kubenswrapper[4721]: I0202 13:01:08.783785 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:08 crc kubenswrapper[4721]: I0202 13:01:08.783798 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:09 crc kubenswrapper[4721]: I0202 13:01:09.171330 4721 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 13:01:09 crc kubenswrapper[4721]: I0202 13:01:09.171427 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 13:01:09 crc kubenswrapper[4721]: I0202 13:01:09.354654 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 12:11:44.718899712 +0000 UTC Feb 02 13:01:09 crc kubenswrapper[4721]: I0202 13:01:09.965969 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:01:09 crc kubenswrapper[4721]: I0202 13:01:09.966296 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:09 crc kubenswrapper[4721]: I0202 13:01:09.967673 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:09 crc kubenswrapper[4721]: I0202 13:01:09.967741 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 
13:01:09 crc kubenswrapper[4721]: I0202 13:01:09.967764 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:01:10 crc kubenswrapper[4721]: I0202 13:01:10.355684 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 15:59:00.36063634 +0000 UTC
Feb 02 13:01:10 crc kubenswrapper[4721]: E0202 13:01:10.500744 4721 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 02 13:01:11 crc kubenswrapper[4721]: I0202 13:01:11.356762 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 00:32:52.876264049 +0000 UTC
Feb 02 13:01:12 crc kubenswrapper[4721]: I0202 13:01:12.357922 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 10:45:15.433587284 +0000 UTC
Feb 02 13:01:13 crc kubenswrapper[4721]: I0202 13:01:13.358395 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 17:54:37.150217509 +0000 UTC
Feb 02 13:01:13 crc kubenswrapper[4721]: I0202 13:01:13.605785 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 02 13:01:13 crc kubenswrapper[4721]: I0202 13:01:13.606129 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 13:01:13 crc kubenswrapper[4721]: I0202 13:01:13.607631 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:01:13 crc kubenswrapper[4721]: I0202 13:01:13.607680 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:01:13 crc kubenswrapper[4721]: I0202 13:01:13.607690 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:01:14 crc kubenswrapper[4721]: I0202 13:01:14.098807 4721 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 02 13:01:14 crc kubenswrapper[4721]: I0202 13:01:14.098899 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 02 13:01:14 crc kubenswrapper[4721]: I0202 13:01:14.104692 4721 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 02 13:01:14 crc kubenswrapper[4721]: I0202 13:01:14.104772 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 02 13:01:14 crc kubenswrapper[4721]: I0202 13:01:14.359172 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 10:35:29.13958726 +0000 UTC
Feb 02 13:01:15 crc kubenswrapper[4721]: I0202 13:01:15.359312 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 23:10:55.962623863 +0000 UTC
Feb 02 13:01:16 crc kubenswrapper[4721]: I0202 13:01:16.018038 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 13:01:16 crc kubenswrapper[4721]: I0202 13:01:16.018275 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 13:01:16 crc kubenswrapper[4721]: I0202 13:01:16.019495 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:01:16 crc kubenswrapper[4721]: I0202 13:01:16.019533 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:01:16 crc kubenswrapper[4721]: I0202 13:01:16.019542 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:01:16 crc kubenswrapper[4721]: I0202 13:01:16.024436 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 13:01:16 crc kubenswrapper[4721]: I0202 13:01:16.360217 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 21:22:10.738301974 +0000 UTC
Feb 02 13:01:16 crc kubenswrapper[4721]: I0202 13:01:16.526353 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 13:01:16 crc kubenswrapper[4721]: I0202 13:01:16.527457 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:01:16 crc kubenswrapper[4721]: I0202 13:01:16.527494 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:01:16 crc kubenswrapper[4721]: I0202 13:01:16.527504 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:01:17 crc kubenswrapper[4721]: I0202 13:01:17.360349 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 11:12:36.504338483 +0000 UTC
Feb 02 13:01:18 crc kubenswrapper[4721]: I0202 13:01:18.361835 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 16:26:42.67472329 +0000 UTC
Feb 02 13:01:18 crc kubenswrapper[4721]: I0202 13:01:18.402158 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 13:01:18 crc kubenswrapper[4721]: I0202 13:01:18.402364 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 13:01:18 crc kubenswrapper[4721]: I0202 13:01:18.404110 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:01:18 crc kubenswrapper[4721]: I0202 13:01:18.404172 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:01:18 crc kubenswrapper[4721]: I0202 13:01:18.404190 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:01:19 crc kubenswrapper[4721]: E0202 13:01:19.090053 4721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.093742 4721 trace.go:236] Trace[192135690]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 13:01:07.975) (total time: 11118ms):
Feb 02 13:01:19 crc kubenswrapper[4721]: Trace[192135690]: ---"Objects listed" error: 11118ms (13:01:19.093)
Feb 02 13:01:19 crc kubenswrapper[4721]: Trace[192135690]: [11.118650069s] [11.118650069s] END
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.093776 4721 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.093762 4721 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.093776 4721 trace.go:236] Trace[660090621]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 13:01:06.514) (total time: 12579ms):
Feb 02 13:01:19 crc kubenswrapper[4721]: Trace[660090621]: ---"Objects listed" error: 12579ms (13:01:19.093)
Feb 02 13:01:19 crc kubenswrapper[4721]: Trace[660090621]: [12.579577534s] [12.579577534s] END
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.093843 4721 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.093970 4721 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.094053 4721 trace.go:236] Trace[478236917]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 13:01:04.461) (total time: 14632ms):
Feb 02 13:01:19 crc kubenswrapper[4721]: Trace[478236917]: ---"Objects listed" error: 14631ms (13:01:19.093)
Feb 02 13:01:19 crc kubenswrapper[4721]: Trace[478236917]: [14.63208771s] [14.63208771s] END
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.094112 4721 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 02 13:01:19 crc kubenswrapper[4721]: E0202 13:01:19.095106 4721 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.105452 4721 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.119716 4721 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58242->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.119807 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58242->192.168.126.11:17697: read: connection reset by peer"
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.120552 4721 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.120675 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.172271 4721 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.172346 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.343941 4721 apiserver.go:52] "Watching apiserver"
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.347984 4721 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.348240 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.348587 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.348670 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.348686 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 13:01:19 crc kubenswrapper[4721]: E0202 13:01:19.348800 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 13:01:19 crc kubenswrapper[4721]: E0202 13:01:19.348960 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.349017 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.349586 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 13:01:19 crc kubenswrapper[4721]: E0202 13:01:19.349634 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.349684 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.350815 4721 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.351129 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.351196 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.351356 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.351379 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.351567 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.351585 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.351601 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.351655 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.351920 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.362003 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 07:06:56.180539798 +0000 UTC
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.376174 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.385905 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.395495 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.395735 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.395781 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.395800 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.395819 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.395834 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.395850 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.395864 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.395877 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.395893 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.395910 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.395925 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.395939 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.395956 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.395971 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.395988 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396004 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396021 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396042 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396104 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396123 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396142 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396162 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396170 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396184 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396239 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396262 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396284 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396308 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396329 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396351 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396374 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396397 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396420 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396443 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396459 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396471 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396492 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396517 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396541 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396565 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396590 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396613 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396635 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396659 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396686 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396712 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396733 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396756 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396778 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396801 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396824 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396845 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396867 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396888 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396909 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396964 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.396986 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397015 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397037 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397037 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397080 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397088 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397120 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397146 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397199 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397223 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397250 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397272 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397294 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397314 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397321 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397337 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397363 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397389 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397396 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397412 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397443 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397469 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397491 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397507 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397516 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397540 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397554 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397566 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397558 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397589 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397613 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397639 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397663 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397685 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397729 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397743 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397752 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397776 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397811 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397834 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397856 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397883 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397908 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397930 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397936 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397951 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.397984 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398039 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398084 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398093 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398107 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398165 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398166 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398188 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398212 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398231 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398241 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398247 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398280 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398300 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398318 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398314 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398336 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398355 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398374 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398392 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398411 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398416 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398414 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398433 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398429 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398535 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398575 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398583 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398618 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398642 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398670 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398707 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398721 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). 
InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398730 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398755 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398754 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398774 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398783 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398801 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398812 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398841 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398867 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398889 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398944 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398963 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398978 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.398997 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.399015 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.399033 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.399048 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.399078 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.399080 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.399113 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.399132 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.399175 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.399180 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.399223 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.399345 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.400526 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.400814 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.401026 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.401043 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.401267 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.401354 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.401479 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.399352 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.401581 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.401611 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.402533 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.403034 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.403096 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.403142 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.403184 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.403213 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.403256 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.403288 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.403919 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.403955 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404047 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404107 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404134 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404157 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404183 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404208 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404254 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod 
\"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404284 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404314 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404339 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404364 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404388 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404409 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404432 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404454 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404477 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404503 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404528 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404554 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404577 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404602 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404628 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404653 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404677 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404701 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404724 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404745 4721 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404767 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404791 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404814 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404838 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404865 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404889 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404912 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404937 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404960 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404984 4721 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405010 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405033 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405056 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405097 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405112 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405128 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405145 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405164 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405180 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 13:01:19 crc 
kubenswrapper[4721]: I0202 13:01:19.405197 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405213 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405231 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405248 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405288 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405313 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405332 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405353 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405370 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 13:01:19 crc 
kubenswrapper[4721]: I0202 13:01:19.405389 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405410 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405430 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405450 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405466 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405485 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405581 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405600 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405619 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405903 4721 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405920 4721 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405931 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405942 4721 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405952 4721 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405964 4721 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405974 4721 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405983 4721 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405993 4721 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406003 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406014 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406024 4721 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 
13:01:19.406034 4721 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406044 4721 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406055 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406089 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406099 4721 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406109 4721 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406119 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406128 4721 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406140 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406150 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406160 4721 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406170 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406180 4721 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406190 4721 
reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406200 4721 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406210 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406219 4721 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406229 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406239 4721 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406249 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406259 4721 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406269 4721 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406279 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406291 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.401781 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.401798 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.402020 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.401378 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.402328 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.402343 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.412992 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.402779 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.402784 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.403016 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.403039 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.403101 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.403288 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.403312 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.403577 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.403597 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.403623 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.403207 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404054 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404208 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404223 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404264 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404354 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404382 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404621 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.404701 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405119 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405119 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405164 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405376 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405520 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405528 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405640 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405833 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405838 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405856 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.405928 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406080 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406276 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406330 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406362 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406537 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406554 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406652 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.406715 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.407081 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.407323 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.407398 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.407697 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: E0202 13:01:19.407811 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:01:19.907788418 +0000 UTC m=+20.210302807 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.408186 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.408275 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.408320 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.408345 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.408493 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.408525 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.408737 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.408925 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.408900 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.409111 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.409222 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.409578 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.409664 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.410058 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.410813 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.411209 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.411220 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.411348 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.411446 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.411532 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.411653 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.411708 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.411795 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.411879 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.412238 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.412887 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.413302 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.413322 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.413529 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.413430 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.413886 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.413976 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.413988 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.414169 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.414292 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.414465 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.414551 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.414621 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.414731 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.414734 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.414850 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). 
InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.414872 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.415011 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.415375 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.415370 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: E0202 13:01:19.415610 4721 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 13:01:19 crc kubenswrapper[4721]: E0202 13:01:19.416371 4721 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 13:01:19 crc kubenswrapper[4721]: E0202 13:01:19.416540 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 13:01:19.916513449 +0000 UTC m=+20.219027838 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.416541 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.416844 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.417317 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.417393 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.417739 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.418209 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.418311 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.417775 4721 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 02 13:01:19 crc kubenswrapper[4721]: E0202 13:01:19.418603 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 13:01:19.918550463 +0000 UTC m=+20.221064852 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.419624 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.419691 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.418868 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.421549 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.428013 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.428275 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: E0202 13:01:19.428688 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:01:19 crc kubenswrapper[4721]: E0202 13:01:19.428712 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:01:19 crc kubenswrapper[4721]: E0202 13:01:19.428727 4721 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:01:19 crc kubenswrapper[4721]: E0202 13:01:19.428781 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-02 13:01:19.928765534 +0000 UTC m=+20.231280003 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.428878 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.429113 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.429135 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.429146 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.429128 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.429207 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.429248 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.429668 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.429711 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.430348 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.430637 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: E0202 13:01:19.430964 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:01:19 crc kubenswrapper[4721]: E0202 13:01:19.431764 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:01:19 crc kubenswrapper[4721]: E0202 13:01:19.431812 4721 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:01:19 crc kubenswrapper[4721]: E0202 13:01:19.431886 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 13:01:19.931867847 +0000 UTC m=+20.234382236 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.433588 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.433654 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.434091 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.434509 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.434642 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.434672 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.434722 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.434746 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.434968 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.435134 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.435230 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.435487 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.435544 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.435705 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.436149 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.436295 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.436857 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.437467 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.437823 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.438119 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.438487 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.438581 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.438615 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.438833 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.439117 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.439261 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.439430 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.439988 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.441412 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.441554 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.441708 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.441745 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.441994 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.442048 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.443010 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.444432 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.445044 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.445057 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.445102 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.445252 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.446334 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.446449 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.446957 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.447277 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.447378 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.447722 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.452130 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.453412 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.463054 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.464088 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.465062 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.470802 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.507480 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.507685 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.507738 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.507785 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.507751 4721 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.507815 4721 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.507828 4721 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.507948 4721 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508095 4721 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508111 4721 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508122 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508133 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508143 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508152 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508163 4721 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508172 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508180 4721 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508188 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508220 4721 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508229 4721 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508237 4721 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508245 4721 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508270 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508282 4721 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508330 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508348 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508359 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508373 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508383 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508394 4721 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508405 4721 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508418 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508429 4721 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508440 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508450 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508461 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508471 4721 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508482 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508491 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508513 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508524 4721 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508534 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508544 4721 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508554 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508564 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508574 4721 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508584 4721 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508594 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508605 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508619 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" 
(UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508631 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508642 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508653 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508667 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508686 4721 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508697 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508709 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508720 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508731 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508766 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508778 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508790 4721 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508802 4721 reconciler_common.go:293] "Volume detached for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508813 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508825 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508839 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508851 4721 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508874 4721 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508887 4721 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508898 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508909 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508921 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508932 4721 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508943 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508955 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508966 4721 reconciler_common.go:293] "Volume 
detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508976 4721 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508988 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.508999 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509009 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509020 4721 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509032 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509043 4721 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509055 4721 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509084 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509096 4721 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509108 4721 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509120 4721 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509131 4721 reconciler_common.go:293] "Volume 
detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509142 4721 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509153 4721 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509163 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509175 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509185 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509196 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509206 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509217 4721 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509228 4721 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509239 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509256 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509267 4721 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509284 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509294 4721 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509305 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509315 4721 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509326 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509336 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509347 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509357 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509375 4721 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509387 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509397 4721 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509408 4721 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509418 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509429 4721 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509440 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509451 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509462 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509473 4721 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509483 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509494 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509505 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509517 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509529 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509541 4721 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509551 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509562 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509574 4721 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509585 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509596 4721 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509606 4721 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509622 4721 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509637 4721 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509648 4721 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509660 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509670 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509681 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509691 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509702 4721 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509713 4721 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509723 4721 reconciler_common.go:293] "Volume detached 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509734 4721 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509745 4721 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509757 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509769 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509780 4721 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509790 4721 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509802 4721 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509813 4721 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509823 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509834 4721 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509844 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509854 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509864 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509876 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509887 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509899 4721 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509910 4721 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509921 4721 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509933 4721 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509945 4721 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.509957 4721 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.535893 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.538031 4721 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d" exitCode=255 Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.538114 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d"} Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.548943 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.550773 4721 scope.go:117] "RemoveContainer" containerID="db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.551470 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.561319 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.576185 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.586113 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.595562 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.607441 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.662995 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.668373 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.675975 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 13:01:19 crc kubenswrapper[4721]: W0202 13:01:19.692422 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-19f26c18af962b39775d2f11ae06ed7c602e93044b08d29c2f05ec27c97ef84b WatchSource:0}: Error finding container 19f26c18af962b39775d2f11ae06ed7c602e93044b08d29c2f05ec27c97ef84b: Status 404 returned error can't find the container with id 19f26c18af962b39775d2f11ae06ed7c602e93044b08d29c2f05ec27c97ef84b Feb 02 13:01:19 crc kubenswrapper[4721]: I0202 13:01:19.913723 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:01:19 crc kubenswrapper[4721]: E0202 13:01:19.913897 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:01:20.913863996 +0000 UTC m=+21.216378385 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.015163 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.015282 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.015322 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.015502 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:20 crc kubenswrapper[4721]: E0202 13:01:20.015507 4721 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 13:01:20 crc kubenswrapper[4721]: E0202 13:01:20.015625 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:01:20 crc kubenswrapper[4721]: E0202 13:01:20.015654 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:01:20 crc kubenswrapper[4721]: E0202 13:01:20.015673 4721 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:01:20 crc kubenswrapper[4721]: E0202 13:01:20.015686 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-02 13:01:21.015662018 +0000 UTC m=+21.318176427 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 13:01:20 crc kubenswrapper[4721]: E0202 13:01:20.015714 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 13:01:21.015700439 +0000 UTC m=+21.318214838 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:01:20 crc kubenswrapper[4721]: E0202 13:01:20.015797 4721 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 13:01:20 crc kubenswrapper[4721]: E0202 13:01:20.015835 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 13:01:21.015825372 +0000 UTC m=+21.318339771 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 13:01:20 crc kubenswrapper[4721]: E0202 13:01:20.015924 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:01:20 crc kubenswrapper[4721]: E0202 13:01:20.015945 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:01:20 crc kubenswrapper[4721]: E0202 13:01:20.015956 4721 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:01:20 crc kubenswrapper[4721]: E0202 13:01:20.015988 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 13:01:21.015979396 +0000 UTC m=+21.318493795 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.363134 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 02:44:05.478460878 +0000 UTC Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.409525 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:20 crc kubenswrapper[4721]: E0202 13:01:20.409697 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.413435 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.414184 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.415281 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.415916 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.416865 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.417399 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.417999 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.418939 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.419600 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.420601 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.421239 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.422288 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.422741 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.423238 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.424151 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.424628 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.425628 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.426105 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.426810 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.428029 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.428687 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.429653 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.430230 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" 
path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.431158 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.431779 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.433836 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.435318 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.435950 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.436673 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.437830 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.438511 4721 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.438709 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.441139 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.441928 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.442683 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.444887 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.445035 4721 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.445856 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.446517 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.447887 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.449415 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.449928 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 02 13:01:20 crc 
kubenswrapper[4721]: I0202 13:01:20.450760 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.451917 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.452964 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.453550 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.454618 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.455347 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.456534 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.457117 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.458013 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.458682 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.459357 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.460611 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.461246 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.466982 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.488913 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.514456 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.542686 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443"} Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.542744 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"bdabbc19c77153242b2b6ba0235517389b7b070064dfcaddbe90f8cdd80faf89"} Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.544909 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.546656 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02
T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.546921 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e"} Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.547659 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.548764 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"19f26c18af962b39775d2f11ae06ed7c602e93044b08d29c2f05ec27c97ef84b"} Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.551347 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac"} Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.551376 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68"} Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.551386 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d0934c59dd2f80b7ce756beac48bdb978f52b606f888232b5602aa31229ef731"} Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.569295 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.587215 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.603624 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.617630 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.632429 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.646239 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.665027 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.681668 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.698485 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:20 crc kubenswrapper[4721]: I0202 13:01:20.924056 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:01:20 crc kubenswrapper[4721]: E0202 13:01:20.924244 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:01:22.924224977 +0000 UTC m=+23.226739366 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:01:21 crc kubenswrapper[4721]: I0202 13:01:21.024756 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:01:21 crc kubenswrapper[4721]: I0202 13:01:21.024812 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:01:21 crc kubenswrapper[4721]: I0202 13:01:21.024834 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:21 crc kubenswrapper[4721]: I0202 13:01:21.024855 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:21 crc kubenswrapper[4721]: E0202 13:01:21.024934 4721 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 13:01:21 crc kubenswrapper[4721]: E0202 13:01:21.024947 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:01:21 crc kubenswrapper[4721]: E0202 13:01:21.024969 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:01:21 crc kubenswrapper[4721]: E0202 13:01:21.024982 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 13:01:23.02496882 +0000 UTC m=+23.327483209 (durationBeforeRetry 2s). 
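
The UnmountVolume.TearDown failure above is a bootstrap ordering problem rather than data loss: the CSI driver kubevirt.io.hostpath-provisioner registers itself with the kubelet only after its own pod comes up, so teardown attempts made this early find no driver in the registry and are requeued. Below is a toy sketch of the lookup pattern behind the error message; it is an assumed shape for illustration, not kubelet's actual implementation.

// Toy sketch: volume teardown asks a registry for a CSI driver by name and
// fails fast when the driver has not registered yet.
package main

import (
	"fmt"
	"sync"
)

type driverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]struct{}
}

func (r *driverRegistry) get(name string) error {
	r.mu.RLock()
	defer r.mu.RUnlock()
	if _, ok := r.drivers[name]; !ok {
		return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return nil
}

func main() {
	reg := &driverRegistry{drivers: map[string]struct{}{}}
	// The hostpath provisioner has not registered yet, so the lookup fails,
	// which is what surfaces as UnmountVolume.TearDown in the log above.
	fmt.Println(reg.get("kubevirt.io.hostpath-provisioner"))
}
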
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 13:01:21 crc kubenswrapper[4721]: E0202 13:01:21.024982 4721 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:01:21 crc kubenswrapper[4721]: E0202 13:01:21.025015 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 13:01:23.025006291 +0000 UTC m=+23.327520680 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:01:21 crc kubenswrapper[4721]: E0202 13:01:21.025033 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:01:21 crc kubenswrapper[4721]: E0202 13:01:21.025053 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:01:21 crc kubenswrapper[4721]: E0202 13:01:21.025056 4721 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 13:01:21 crc kubenswrapper[4721]: E0202 13:01:21.025078 4721 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:01:21 crc kubenswrapper[4721]: E0202 13:01:21.025094 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 13:01:23.025085973 +0000 UTC m=+23.327600362 (durationBeforeRetry 2s). 
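
The "object ... not registered" errors are the kubelet's local object cache speaking: configmaps and secrets are tracked per pod, and until the pod workers resync after the restart, the sources backing the projected kube-api-access-* volumes (kube-root-ca.crt, openshift-service-ca.crt) are simply absent. The sketch below models the aggregation behavior implied by the bracketed error lists; the cache and fetch helper are hypothetical, shaped to match the log output.

// Sketch of how a projected volume aggregates per-source failures
// (assumed behavior modeled on the log; not the kubelet's real code).
package main

import "fmt"

type source struct{ namespace, name string }

func fetchConfigMap(s source, cache map[string][]byte) ([]byte, error) {
	data, ok := cache[s.namespace+"/"+s.name]
	if !ok {
		return nil, fmt.Errorf("object %q/%q not registered", s.namespace, s.name)
	}
	return data, nil
}

func main() {
	cache := map[string][]byte{} // local object cache is still empty at this point in boot
	sources := []source{
		{"openshift-network-diagnostics", "kube-root-ca.crt"},
		{"openshift-network-diagnostics", "openshift-service-ca.crt"},
	}
	var errs []error
	for _, s := range sources {
		if _, err := fetchConfigMap(s, cache); err != nil {
			errs = append(errs, err)
		}
	}
	// Printed together, mirroring the log's aggregated error list.
	fmt.Println(errs)
}
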
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 13:01:21 crc kubenswrapper[4721]: E0202 13:01:21.025109 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 13:01:23.025102044 +0000 UTC m=+23.327616443 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:01:21 crc kubenswrapper[4721]: I0202 13:01:21.364048 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 05:08:03.944200517 +0000 UTC Feb 02 13:01:21 crc kubenswrapper[4721]: I0202 13:01:21.408960 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:01:21 crc kubenswrapper[4721]: I0202 13:01:21.409113 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:01:21 crc kubenswrapper[4721]: E0202 13:01:21.409154 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:01:21 crc kubenswrapper[4721]: E0202 13:01:21.409359 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:01:22 crc kubenswrapper[4721]: I0202 13:01:22.364601 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 07:33:01.891982329 +0000 UTC Feb 02 13:01:22 crc kubenswrapper[4721]: I0202 13:01:22.409337 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:22 crc kubenswrapper[4721]: E0202 13:01:22.409562 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:01:22 crc kubenswrapper[4721]: I0202 13:01:22.560001 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221"} Feb 02 13:01:22 crc kubenswrapper[4721]: I0202 13:01:22.578212 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:22Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:22 crc kubenswrapper[4721]: I0202 13:01:22.597428 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:22Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:22 crc kubenswrapper[4721]: I0202 13:01:22.621524 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:22Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:22 crc kubenswrapper[4721]: I0202 13:01:22.637757 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:22Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:22 crc kubenswrapper[4721]: I0202 13:01:22.654403 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:22Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:22 crc kubenswrapper[4721]: I0202 13:01:22.676582 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:22Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:22 crc kubenswrapper[4721]: I0202 13:01:22.697600 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:22Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:22 crc kubenswrapper[4721]: I0202 13:01:22.941778 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:01:22 crc kubenswrapper[4721]: E0202 13:01:22.941930 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:01:26.941909546 +0000 UTC m=+27.244423945 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:01:23 crc kubenswrapper[4721]: I0202 13:01:23.042332 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:01:23 crc kubenswrapper[4721]: I0202 13:01:23.042385 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:23 crc kubenswrapper[4721]: I0202 13:01:23.042421 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:23 crc kubenswrapper[4721]: I0202 
13:01:23.042469 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:01:23 crc kubenswrapper[4721]: E0202 13:01:23.042526 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:01:23 crc kubenswrapper[4721]: E0202 13:01:23.042560 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:01:23 crc kubenswrapper[4721]: E0202 13:01:23.042563 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:01:23 crc kubenswrapper[4721]: E0202 13:01:23.042579 4721 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:01:23 crc kubenswrapper[4721]: E0202 13:01:23.042583 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:01:23 crc kubenswrapper[4721]: E0202 13:01:23.042595 4721 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:01:23 crc kubenswrapper[4721]: E0202 13:01:23.042647 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 13:01:27.042622409 +0000 UTC m=+27.345136808 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:01:23 crc kubenswrapper[4721]: E0202 13:01:23.042657 4721 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 13:01:23 crc kubenswrapper[4721]: E0202 13:01:23.042675 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 13:01:27.04266591 +0000 UTC m=+27.345180309 (durationBeforeRetry 4s). 
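
Note the retry spacing: the same mount and unmount operations that were deferred for 2s at 13:01:21 are now deferred for 4s. nestedpendingoperations backs off exponentially per volume, the pattern sketched here with illustrative constants rather than the kubelet's exact tuning.

// Minimal sketch of the doubling retry delay visible in the log
// (durationBeforeRetry 2s, then 4s): exponential backoff with a cap.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 2 * time.Second
	const maxDelay = 2 * time.Minute
	for i := 0; i < 5; i++ {
		fmt.Printf("retry %d: no retries permitted for %s\n", i+1, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
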
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:01:23 crc kubenswrapper[4721]: E0202 13:01:23.042787 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 13:01:27.042758903 +0000 UTC m=+27.345273352 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 13:01:23 crc kubenswrapper[4721]: E0202 13:01:23.042670 4721 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 13:01:23 crc kubenswrapper[4721]: E0202 13:01:23.042852 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 13:01:27.042838755 +0000 UTC m=+27.345353224 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 13:01:23 crc kubenswrapper[4721]: I0202 13:01:23.365660 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 15:30:56.03846493 +0000 UTC Feb 02 13:01:23 crc kubenswrapper[4721]: I0202 13:01:23.409693 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:01:23 crc kubenswrapper[4721]: I0202 13:01:23.409754 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:01:23 crc kubenswrapper[4721]: E0202 13:01:23.409942 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
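
The recurring "Error syncing pod, skipping" records are gated on container runtime network readiness: until the network provider writes a config file into /etc/kubernetes/cni/net.d/, the runtime reports NetworkReady=false and the kubelet declines to create new sandboxes. The sketch below shows that readiness decision as inferred from the message text; the scan logic and file patterns are assumptions, not the runtime's verbatim code.

// Sketch of the readiness check implied by "no CNI configuration file in
// /etc/kubernetes/cni/net.d/": scan the conf dir and report not-ready
// when no candidate config exists yet.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func networkReady(confDir string) (bool, error) {
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pat))
		if err != nil {
			return false, err
		}
		if len(matches) > 0 {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ready, err := networkReady("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Println("scan failed:", err)
		os.Exit(1)
	}
	fmt.Println("NetworkReady =", ready)
}
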
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:01:23 crc kubenswrapper[4721]: E0202 13:01:23.410137 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:01:23 crc kubenswrapper[4721]: I0202 13:01:23.641415 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 02 13:01:23 crc kubenswrapper[4721]: I0202 13:01:23.659362 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 02 13:01:23 crc kubenswrapper[4721]: I0202 13:01:23.663653 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:23 crc kubenswrapper[4721]: I0202 13:01:23.664262 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 02 13:01:23 crc kubenswrapper[4721]: I0202 13:01:23.680740 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:23 crc kubenswrapper[4721]: I0202 13:01:23.699032 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:23 crc kubenswrapper[4721]: I0202 13:01:23.714233 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:23 crc kubenswrapper[4721]: I0202 13:01:23.729605 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:23 crc kubenswrapper[4721]: I0202 13:01:23.743052 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:23 crc kubenswrapper[4721]: I0202 13:01:23.756867 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:23 crc kubenswrapper[4721]: I0202 13:01:23.782639 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:23 crc kubenswrapper[4721]: I0202 13:01:23.798783 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:23 crc kubenswrapper[4721]: I0202 13:01:23.813738 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:23 crc kubenswrapper[4721]: I0202 13:01:23.826031 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:23 crc kubenswrapper[4721]: I0202 13:01:23.842887 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:23 crc kubenswrapper[4721]: I0202 13:01:23.856723 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:23 crc kubenswrapper[4721]: I0202 13:01:23.871275 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:23 crc kubenswrapper[4721]: I0202 13:01:23.885109 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:23Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:24 crc kubenswrapper[4721]: I0202 13:01:24.366756 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 11:32:26.322880796 +0000 UTC Feb 02 13:01:24 crc kubenswrapper[4721]: I0202 13:01:24.409613 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:24 crc kubenswrapper[4721]: E0202 13:01:24.409856 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:01:24 crc kubenswrapper[4721]: I0202 13:01:24.995223 4721 csr.go:261] certificate signing request csr-pcbw8 is approved, waiting to be issued Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.010641 4721 csr.go:257] certificate signing request csr-pcbw8 is issued Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.090999 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-sgp8m"] Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.091458 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-sgp8m" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.097481 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.098055 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.098498 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.145654 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.161665 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/13f3ec54-a7fb-4236-9583-827d960b2086-hosts-file\") pod \"node-resolver-sgp8m\" (UID: \"13f3ec54-a7fb-4236-9583-827d960b2086\") " pod="openshift-dns/node-resolver-sgp8m" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.161712 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n42qk\" (UniqueName: \"kubernetes.io/projected/13f3ec54-a7fb-4236-9583-827d960b2086-kube-api-access-n42qk\") pod \"node-resolver-sgp8m\" (UID: \"13f3ec54-a7fb-4236-9583-827d960b2086\") " pod="openshift-dns/node-resolver-sgp8m" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.174793 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.193300 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.212740 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.229083 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.241358 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.260269 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878
a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.262622 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/13f3ec54-a7fb-4236-9583-827d960b2086-hosts-file\") pod \"node-resolver-sgp8m\" (UID: \"13f3ec54-a7fb-4236-9583-827d960b2086\") " pod="openshift-dns/node-resolver-sgp8m" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.262669 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n42qk\" (UniqueName: \"kubernetes.io/projected/13f3ec54-a7fb-4236-9583-827d960b2086-kube-api-access-n42qk\") pod \"node-resolver-sgp8m\" (UID: \"13f3ec54-a7fb-4236-9583-827d960b2086\") " pod="openshift-dns/node-resolver-sgp8m" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.262758 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/13f3ec54-a7fb-4236-9583-827d960b2086-hosts-file\") pod \"node-resolver-sgp8m\" (UID: \"13f3ec54-a7fb-4236-9583-827d960b2086\") " pod="openshift-dns/node-resolver-sgp8m" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.274019 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.284320 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n42qk\" (UniqueName: \"kubernetes.io/projected/13f3ec54-a7fb-4236-9583-827d960b2086-kube-api-access-n42qk\") pod \"node-resolver-sgp8m\" (UID: \"13f3ec54-a7fb-4236-9583-827d960b2086\") " pod="openshift-dns/node-resolver-sgp8m" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.289148 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.367103 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 17:23:24.279180963 +0000 UTC Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.404615 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-sgp8m" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.408825 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.408880 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:01:25 crc kubenswrapper[4721]: E0202 13:01:25.408990 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:01:25 crc kubenswrapper[4721]: E0202 13:01:25.409174 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:01:25 crc kubenswrapper[4721]: W0202 13:01:25.415526 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13f3ec54_a7fb_4236_9583_827d960b2086.slice/crio-18c27a1729151a5401a1154b35ef853fa559bec7a8014eec078050188649df3a WatchSource:0}: Error finding container 18c27a1729151a5401a1154b35ef853fa559bec7a8014eec078050188649df3a: Status 404 returned error can't find the container with id 18c27a1729151a5401a1154b35ef853fa559bec7a8014eec078050188649df3a Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.495475 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.497240 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.497289 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.497304 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.497381 4721 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.504703 4721 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.505026 4721 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.506236 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.506273 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.506285 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.506303 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.506314 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:25Z","lastTransitionTime":"2026-02-02T13:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:25 crc kubenswrapper[4721]: E0202 13:01:25.523961 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.527511 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.527557 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.527571 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.527588 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.527601 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:25Z","lastTransitionTime":"2026-02-02T13:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:25 crc kubenswrapper[4721]: E0202 13:01:25.538685 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.542532 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.542572 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.542584 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.542600 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.542611 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:25Z","lastTransitionTime":"2026-02-02T13:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:25 crc kubenswrapper[4721]: E0202 13:01:25.556295 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.561218 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.561270 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.561285 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.561306 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.561326 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:25Z","lastTransitionTime":"2026-02-02T13:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.569780 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-sgp8m" event={"ID":"13f3ec54-a7fb-4236-9583-827d960b2086","Type":"ContainerStarted","Data":"18c27a1729151a5401a1154b35ef853fa559bec7a8014eec078050188649df3a"} Feb 02 13:01:25 crc kubenswrapper[4721]: E0202 13:01:25.577245 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.581024 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.581048 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.581055 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.581083 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.581092 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:25Z","lastTransitionTime":"2026-02-02T13:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:25 crc kubenswrapper[4721]: E0202 13:01:25.610632 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:25Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:25 crc kubenswrapper[4721]: E0202 13:01:25.610744 4721 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.612330 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.612358 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.612367 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.612383 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.612394 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:25Z","lastTransitionTime":"2026-02-02T13:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.715287 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.715339 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.715349 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.715364 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.715375 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:25Z","lastTransitionTime":"2026-02-02T13:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.818272 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.818329 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.818340 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.818355 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.818365 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:25Z","lastTransitionTime":"2026-02-02T13:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.921377 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.921433 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.921452 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.921473 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:25 crc kubenswrapper[4721]: I0202 13:01:25.921489 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:25Z","lastTransitionTime":"2026-02-02T13:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.009235 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-ltw7d"] Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.009548 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-x4lhg"] Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.009698 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.010111 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.011356 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.011669 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 02 13:01:26 crc kubenswrapper[4721]: W0202 13:01:26.014215 4721 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: configmaps "default-cni-sysctl-allowlist" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Feb 02 13:01:26 crc kubenswrapper[4721]: E0202 13:01:26.014263 4721 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"default-cni-sysctl-allowlist\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 02 13:01:26 crc kubenswrapper[4721]: W0202 13:01:26.014314 4721 reflector.go:561] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": failed to list *v1.Secret: secrets "multus-ancillary-tools-dockercfg-vnmsz" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Feb 02 13:01:26 crc kubenswrapper[4721]: E0202 13:01:26.014328 4721 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-vnmsz\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"multus-ancillary-tools-dockercfg-vnmsz\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.014440 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.014882 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-rppjz"] Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.015174 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.015187 4721 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-02 12:56:25 +0000 UTC, rotation deadline is 2026-11-02 08:24:05.45723808 +0000 UTC Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.015223 4721 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6547h22m39.442018442s for next certificate rotation Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.015298 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.015450 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.015548 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pwcs2"] Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.016283 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.018009 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.019133 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.019334 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.023657 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.023700 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.023714 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.023747 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.023761 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:26Z","lastTransitionTime":"2026-02-02T13:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.024194 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.024194 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.024261 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.024363 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.024476 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.025834 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.027934 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.027988 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.028018 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.041668 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.067767 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.080506 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.097347 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.118578 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.126006 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.126050 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.126078 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.126096 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.126110 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:26Z","lastTransitionTime":"2026-02-02T13:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.150629 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170215 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/04b1629d-0184-4975-8d4b-7a32913e7389-os-release\") pod \"multus-additional-cni-plugins-x4lhg\" (UID: \"04b1629d-0184-4975-8d4b-7a32913e7389\") " pod="openshift-multus/multus-additional-cni-plugins-x4lhg" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170256 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-var-lib-openvswitch\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170273 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-host-var-lib-cni-multus\") pod \"multus-ltw7d\" (UID: 
\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170286 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-host-var-lib-kubelet\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170348 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/04b1629d-0184-4975-8d4b-7a32913e7389-cnibin\") pod \"multus-additional-cni-plugins-x4lhg\" (UID: \"04b1629d-0184-4975-8d4b-7a32913e7389\") " pod="openshift-multus/multus-additional-cni-plugins-x4lhg" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170407 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88f6w\" (UniqueName: \"kubernetes.io/projected/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-kube-api-access-88f6w\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170456 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm6j6\" (UniqueName: \"kubernetes.io/projected/bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877-kube-api-access-nm6j6\") pod \"machine-config-daemon-rppjz\" (UID: \"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\") " pod="openshift-machine-config-operator/machine-config-daemon-rppjz" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170507 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170533 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-system-cni-dir\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170561 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/04b1629d-0184-4975-8d4b-7a32913e7389-system-cni-dir\") pod \"multus-additional-cni-plugins-x4lhg\" (UID: \"04b1629d-0184-4975-8d4b-7a32913e7389\") " pod="openshift-multus/multus-additional-cni-plugins-x4lhg" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170583 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-host-var-lib-cni-bin\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170606 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-h2tcn\" (UniqueName: \"kubernetes.io/projected/04b1629d-0184-4975-8d4b-7a32913e7389-kube-api-access-h2tcn\") pod \"multus-additional-cni-plugins-x4lhg\" (UID: \"04b1629d-0184-4975-8d4b-7a32913e7389\") " pod="openshift-multus/multus-additional-cni-plugins-x4lhg" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170627 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877-mcd-auth-proxy-config\") pod \"machine-config-daemon-rppjz\" (UID: \"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\") " pod="openshift-machine-config-operator/machine-config-daemon-rppjz" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170649 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-cnibin\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170670 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-multus-socket-dir-parent\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170692 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-etc-kubernetes\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170717 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-etc-openvswitch\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170740 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-run-ovn\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170770 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-env-overrides\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170792 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-host-run-k8s-cni-cncf-io\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 
13:01:26.170811 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-multus-daemon-config\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170827 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/04b1629d-0184-4975-8d4b-7a32913e7389-cni-binary-copy\") pod \"multus-additional-cni-plugins-x4lhg\" (UID: \"04b1629d-0184-4975-8d4b-7a32913e7389\") " pod="openshift-multus/multus-additional-cni-plugins-x4lhg" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170842 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-systemd-units\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170856 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-run-systemd\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170872 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-multus-cni-dir\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170886 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-host-run-netns\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170903 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877-proxy-tls\") pod \"machine-config-daemon-rppjz\" (UID: \"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\") " pod="openshift-machine-config-operator/machine-config-daemon-rppjz" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170923 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-cni-netd\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170937 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/04b1629d-0184-4975-8d4b-7a32913e7389-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-x4lhg\" (UID: \"04b1629d-0184-4975-8d4b-7a32913e7389\") " 
pod="openshift-multus/multus-additional-cni-plugins-x4lhg" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170951 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-run-netns\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170967 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-log-socket\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.170981 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-ovnkube-config\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.171000 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-ovn-node-metrics-cert\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.171030 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-host-run-multus-certs\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.171055 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877-rootfs\") pod \"machine-config-daemon-rppjz\" (UID: \"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\") " pod="openshift-machine-config-operator/machine-config-daemon-rppjz" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.171088 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-slash\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.171122 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-cni-bin\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.171142 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dchqf\" (UniqueName: 
\"kubernetes.io/projected/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-kube-api-access-dchqf\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.171162 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-run-ovn-kubernetes\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.171186 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-ovnkube-script-lib\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.171208 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/04b1629d-0184-4975-8d4b-7a32913e7389-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x4lhg\" (UID: \"04b1629d-0184-4975-8d4b-7a32913e7389\") " pod="openshift-multus/multus-additional-cni-plugins-x4lhg" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.171228 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-kubelet\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.171247 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-node-log\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.171264 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-cni-binary-copy\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.171290 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-os-release\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.171307 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-hostroot\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.171328 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-multus-conf-dir\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.171357 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-run-openvswitch\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.178986 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.179108 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.187925 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.192555 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.228772 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.228807 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.228817 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.228834 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.228847 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:26Z","lastTransitionTime":"2026-02-02T13:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.246454 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.266500 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272266 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-slash\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272313 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-cni-bin\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272336 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dchqf\" (UniqueName: \"kubernetes.io/projected/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-kube-api-access-dchqf\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272363 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-run-ovn-kubernetes\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272388 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-ovnkube-script-lib\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272419 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/04b1629d-0184-4975-8d4b-7a32913e7389-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x4lhg\" (UID: \"04b1629d-0184-4975-8d4b-7a32913e7389\") " pod="openshift-multus/multus-additional-cni-plugins-x4lhg" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272440 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-kubelet\") pod 
\"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272458 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-node-log\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272477 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-cni-binary-copy\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272507 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-run-openvswitch\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272527 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-os-release\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272545 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-hostroot\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272564 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-multus-conf-dir\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272590 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/04b1629d-0184-4975-8d4b-7a32913e7389-cnibin\") pod \"multus-additional-cni-plugins-x4lhg\" (UID: \"04b1629d-0184-4975-8d4b-7a32913e7389\") " pod="openshift-multus/multus-additional-cni-plugins-x4lhg" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272612 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/04b1629d-0184-4975-8d4b-7a32913e7389-os-release\") pod \"multus-additional-cni-plugins-x4lhg\" (UID: \"04b1629d-0184-4975-8d4b-7a32913e7389\") " pod="openshift-multus/multus-additional-cni-plugins-x4lhg" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272632 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-var-lib-openvswitch\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: 
I0202 13:01:26.272652 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-host-var-lib-cni-multus\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272672 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-host-var-lib-kubelet\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272699 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88f6w\" (UniqueName: \"kubernetes.io/projected/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-kube-api-access-88f6w\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272723 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm6j6\" (UniqueName: \"kubernetes.io/projected/bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877-kube-api-access-nm6j6\") pod \"machine-config-daemon-rppjz\" (UID: \"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\") " pod="openshift-machine-config-operator/machine-config-daemon-rppjz" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272749 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/04b1629d-0184-4975-8d4b-7a32913e7389-system-cni-dir\") pod \"multus-additional-cni-plugins-x4lhg\" (UID: \"04b1629d-0184-4975-8d4b-7a32913e7389\") " pod="openshift-multus/multus-additional-cni-plugins-x4lhg" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272773 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272797 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-system-cni-dir\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272820 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2tcn\" (UniqueName: \"kubernetes.io/projected/04b1629d-0184-4975-8d4b-7a32913e7389-kube-api-access-h2tcn\") pod \"multus-additional-cni-plugins-x4lhg\" (UID: \"04b1629d-0184-4975-8d4b-7a32913e7389\") " pod="openshift-multus/multus-additional-cni-plugins-x4lhg" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272843 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-host-var-lib-cni-bin\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 
02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272862 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-cnibin\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272885 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-multus-socket-dir-parent\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272905 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-etc-kubernetes\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272928 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877-mcd-auth-proxy-config\") pod \"machine-config-daemon-rppjz\" (UID: \"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\") " pod="openshift-machine-config-operator/machine-config-daemon-rppjz" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272950 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-etc-openvswitch\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272972 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/04b1629d-0184-4975-8d4b-7a32913e7389-cni-binary-copy\") pod \"multus-additional-cni-plugins-x4lhg\" (UID: \"04b1629d-0184-4975-8d4b-7a32913e7389\") " pod="openshift-multus/multus-additional-cni-plugins-x4lhg" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.272992 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-systemd-units\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.273012 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-run-ovn\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.273032 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-env-overrides\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.273056 4721 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-host-run-k8s-cni-cncf-io\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.273098 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-multus-daemon-config\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.273120 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-run-systemd\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.273140 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-multus-cni-dir\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.273161 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-host-run-netns\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.273182 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877-proxy-tls\") pod \"machine-config-daemon-rppjz\" (UID: \"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\") " pod="openshift-machine-config-operator/machine-config-daemon-rppjz" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.273202 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-cni-netd\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.273223 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/04b1629d-0184-4975-8d4b-7a32913e7389-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-x4lhg\" (UID: \"04b1629d-0184-4975-8d4b-7a32913e7389\") " pod="openshift-multus/multus-additional-cni-plugins-x4lhg" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.273244 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-run-netns\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.273265 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-ovnkube-config\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.273286 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-log-socket\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.273305 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-ovn-node-metrics-cert\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.273327 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-host-run-multus-certs\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.273377 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877-rootfs\") pod \"machine-config-daemon-rppjz\" (UID: \"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\") " pod="openshift-machine-config-operator/machine-config-daemon-rppjz" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.273456 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877-rootfs\") pod \"machine-config-daemon-rppjz\" (UID: \"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\") " pod="openshift-machine-config-operator/machine-config-daemon-rppjz" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.273496 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-slash\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.273527 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-cni-bin\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.273745 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-run-ovn-kubernetes\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.273952 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-cnibin\") pod \"multus-ltw7d\" (UID: 
\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.274102 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-var-lib-openvswitch\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.274129 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-hostroot\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.274152 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-multus-conf-dir\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.274161 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-os-release\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.274186 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/04b1629d-0184-4975-8d4b-7a32913e7389-os-release\") pod \"multus-additional-cni-plugins-x4lhg\" (UID: \"04b1629d-0184-4975-8d4b-7a32913e7389\") " pod="openshift-multus/multus-additional-cni-plugins-x4lhg" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.274198 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-kubelet\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.274210 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-run-systemd\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.274226 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-node-log\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.274244 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-multus-socket-dir-parent\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.274265 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-etc-kubernetes\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.274271 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-host-var-lib-cni-multus\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.274328 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/04b1629d-0184-4975-8d4b-7a32913e7389-system-cni-dir\") pod \"multus-additional-cni-plugins-x4lhg\" (UID: \"04b1629d-0184-4975-8d4b-7a32913e7389\") " pod="openshift-multus/multus-additional-cni-plugins-x4lhg" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.274274 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/04b1629d-0184-4975-8d4b-7a32913e7389-cnibin\") pod \"multus-additional-cni-plugins-x4lhg\" (UID: \"04b1629d-0184-4975-8d4b-7a32913e7389\") " pod="openshift-multus/multus-additional-cni-plugins-x4lhg" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.274400 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.274413 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-host-var-lib-kubelet\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.274479 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-system-cni-dir\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.274479 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-ovnkube-script-lib\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.274688 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-host-run-netns\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.274684 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-multus-cni-dir\") pod \"multus-ltw7d\" (UID: 
\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.274723 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-run-ovn\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.274762 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-cni-netd\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.274777 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-cni-binary-copy\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.274818 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-run-openvswitch\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.274834 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-run-netns\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.274840 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877-mcd-auth-proxy-config\") pod \"machine-config-daemon-rppjz\" (UID: \"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\") " pod="openshift-machine-config-operator/machine-config-daemon-rppjz" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.274847 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-host-var-lib-cni-bin\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.274874 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-host-run-multus-certs\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.274895 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-host-run-k8s-cni-cncf-io\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.274968 4721 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-log-socket\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.275140 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-systemd-units\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.275199 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-etc-openvswitch\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.275229 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-env-overrides\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.275309 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/04b1629d-0184-4975-8d4b-7a32913e7389-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x4lhg\" (UID: \"04b1629d-0184-4975-8d4b-7a32913e7389\") " pod="openshift-multus/multus-additional-cni-plugins-x4lhg" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.275444 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-multus-daemon-config\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.275517 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-ovnkube-config\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.275882 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/04b1629d-0184-4975-8d4b-7a32913e7389-cni-binary-copy\") pod \"multus-additional-cni-plugins-x4lhg\" (UID: \"04b1629d-0184-4975-8d4b-7a32913e7389\") " pod="openshift-multus/multus-additional-cni-plugins-x4lhg" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.281047 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-ovn-node-metrics-cert\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.282674 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877-proxy-tls\") pod \"machine-config-daemon-rppjz\" (UID: \"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\") " pod="openshift-machine-config-operator/machine-config-daemon-rppjz" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.293703 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm6j6\" (UniqueName: \"kubernetes.io/projected/bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877-kube-api-access-nm6j6\") pod \"machine-config-daemon-rppjz\" (UID: \"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\") " pod="openshift-machine-config-operator/machine-config-daemon-rppjz" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.295582 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dchqf\" (UniqueName: \"kubernetes.io/projected/5ba84858-caaa-4fba-8eaf-9f7ddece0b3a-kube-api-access-dchqf\") pod \"multus-ltw7d\" (UID: \"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\") " pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.299254 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.301878 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88f6w\" (UniqueName: \"kubernetes.io/projected/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-kube-api-access-88f6w\") pod \"ovnkube-node-pwcs2\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.304101 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2tcn\" (UniqueName: \"kubernetes.io/projected/04b1629d-0184-4975-8d4b-7a32913e7389-kube-api-access-h2tcn\") pod \"multus-additional-cni-plugins-x4lhg\" (UID: \"04b1629d-0184-4975-8d4b-7a32913e7389\") " pod="openshift-multus/multus-additional-cni-plugins-x4lhg" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.319662 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.324958 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-ltw7d" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.330723 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.330909 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.330970 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.331035 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.331115 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:26Z","lastTransitionTime":"2026-02-02T13:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.336150 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: W0202 13:01:26.337339 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ba84858_caaa_4fba_8eaf_9f7ddece0b3a.slice/crio-7c1e0a9b51e6a17d1d4ddea80a323e586d38fb5a143aeb26bc485a08471c30fd WatchSource:0}: Error finding container 7c1e0a9b51e6a17d1d4ddea80a323e586d38fb5a143aeb26bc485a08471c30fd: Status 404 returned error can't find the container with id 7c1e0a9b51e6a17d1d4ddea80a323e586d38fb5a143aeb26bc485a08471c30fd Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.343318 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.352511 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.353876 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: W0202 13:01:26.364202 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc8c3bf4_0f02_47a1_b8b8_1e40a8daa877.slice/crio-088d31a818c8476f9d984933c4a955becb0ddb7e01581d5b834cf1979d30cd8a WatchSource:0}: Error finding container 088d31a818c8476f9d984933c4a955becb0ddb7e01581d5b834cf1979d30cd8a: Status 404 returned error can't find the container with id 088d31a818c8476f9d984933c4a955becb0ddb7e01581d5b834cf1979d30cd8a Feb 02 13:01:26 crc kubenswrapper[4721]: W0202 13:01:26.366612 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb15bc48d_f88d_4b38_a9e1_00bb00b88a52.slice/crio-1aec1a12b2d4ba708a47d40a6a1a8e146d69e8b9d3bc97bc1257a9e8fc573862 WatchSource:0}: Error finding container 1aec1a12b2d4ba708a47d40a6a1a8e146d69e8b9d3bc97bc1257a9e8fc573862: Status 404 returned error can't find the container with id 1aec1a12b2d4ba708a47d40a6a1a8e146d69e8b9d3bc97bc1257a9e8fc573862 Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.368726 4721 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 22:11:50.765167107 +0000 UTC Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.376873 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.392060 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.405734 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is 
after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.408610 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:26 crc kubenswrapper[4721]: E0202 13:01:26.408735 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.420277 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.434130 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.434181 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.434194 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.434215 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.434228 4721 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:26Z","lastTransitionTime":"2026-02-02T13:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.440639 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.456450 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.472049 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.483817 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.497538 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\
\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.509011 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.528428 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.536795 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.536839 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.536852 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.536870 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.536882 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:26Z","lastTransitionTime":"2026-02-02T13:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.574354 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ltw7d" event={"ID":"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a","Type":"ContainerStarted","Data":"0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6"} Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.575119 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ltw7d" event={"ID":"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a","Type":"ContainerStarted","Data":"7c1e0a9b51e6a17d1d4ddea80a323e586d38fb5a143aeb26bc485a08471c30fd"} Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.575674 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-sgp8m" event={"ID":"13f3ec54-a7fb-4236-9583-827d960b2086","Type":"ContainerStarted","Data":"ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5"} Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.577005 4721 generic.go:334] "Generic (PLEG): container finished" podID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerID="406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b" exitCode=0 Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.577083 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" event={"ID":"b15bc48d-f88d-4b38-a9e1-00bb00b88a52","Type":"ContainerDied","Data":"406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b"} Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.577107 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" event={"ID":"b15bc48d-f88d-4b38-a9e1-00bb00b88a52","Type":"ContainerStarted","Data":"1aec1a12b2d4ba708a47d40a6a1a8e146d69e8b9d3bc97bc1257a9e8fc573862"} Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.579041 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerStarted","Data":"142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591"} Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.579083 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerStarted","Data":"088d31a818c8476f9d984933c4a955becb0ddb7e01581d5b834cf1979d30cd8a"} Feb 02 13:01:26 crc kubenswrapper[4721]: E0202 13:01:26.585252 4721 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.598173 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.616423 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.634890 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.638572 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.638604 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.638612 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.638626 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.638637 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:26Z","lastTransitionTime":"2026-02-02T13:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.648552 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.661979 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.675294 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.687734 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.707749 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: 
I0202 13:01:26.722873 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.741396 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.741436 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.741446 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.741459 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.741475 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:26Z","lastTransitionTime":"2026-02-02T13:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.742456 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.758890 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.782291 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878
a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.794721 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.805782 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is 
after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.818940 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.832530 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.844059 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.844123 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.844134 4721 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.844151 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.844164 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:26Z","lastTransitionTime":"2026-02-02T13:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.845125 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.869824 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878
a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.886621 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.900304 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.913358 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.929620 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.944526 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.946107 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.946133 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.946141 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.946155 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.946165 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:26Z","lastTransitionTime":"2026-02-02T13:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.959775 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.975731 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.977894 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:01:26 crc kubenswrapper[4721]: E0202 
13:01:26.978056 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:01:34.978030956 +0000 UTC m=+35.280545345 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:01:26 crc kubenswrapper[4721]: I0202 13:01:26.989943 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o:
//374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:26Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.007091 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:27Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.026519 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:27Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.048572 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.048798 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.048875 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.048944 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.049002 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:27Z","lastTransitionTime":"2026-02-02T13:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.078496 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.078710 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:27 crc kubenswrapper[4721]: E0202 13:01:27.078659 4721 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 13:01:27 crc kubenswrapper[4721]: E0202 13:01:27.078809 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 13:01:35.07879143 +0000 UTC m=+35.381305819 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 13:01:27 crc kubenswrapper[4721]: E0202 13:01:27.078955 4721 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 13:01:27 crc kubenswrapper[4721]: E0202 13:01:27.079077 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 13:01:35.079039916 +0000 UTC m=+35.381554355 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.079198 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.079227 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:01:27 crc kubenswrapper[4721]: E0202 13:01:27.079322 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:01:27 crc kubenswrapper[4721]: E0202 13:01:27.079394 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:01:27 crc kubenswrapper[4721]: E0202 13:01:27.079432 4721 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:01:27 crc kubenswrapper[4721]: E0202 13:01:27.079471 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 13:01:35.079462788 +0000 UTC m=+35.381977177 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:01:27 crc kubenswrapper[4721]: E0202 13:01:27.079477 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:01:27 crc kubenswrapper[4721]: E0202 13:01:27.079516 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:01:27 crc kubenswrapper[4721]: E0202 13:01:27.079531 4721 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:01:27 crc kubenswrapper[4721]: E0202 13:01:27.079590 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 13:01:35.079573101 +0000 UTC m=+35.382087490 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.151640 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.151674 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.151683 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.151697 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.151707 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:27Z","lastTransitionTime":"2026-02-02T13:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.253852 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.253919 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.253928 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.253942 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.253951 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:27Z","lastTransitionTime":"2026-02-02T13:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:27 crc kubenswrapper[4721]: E0202 13:01:27.275570 4721 configmap.go:193] Couldn't get configMap openshift-multus/default-cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition Feb 02 13:01:27 crc kubenswrapper[4721]: E0202 13:01:27.275948 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/04b1629d-0184-4975-8d4b-7a32913e7389-cni-sysctl-allowlist podName:04b1629d-0184-4975-8d4b-7a32913e7389 nodeName:}" failed. No retries permitted until 2026-02-02 13:01:27.775922841 +0000 UTC m=+28.078437230 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/04b1629d-0184-4975-8d4b-7a32913e7389-cni-sysctl-allowlist") pod "multus-additional-cni-plugins-x4lhg" (UID: "04b1629d-0184-4975-8d4b-7a32913e7389") : failed to sync configmap cache: timed out waiting for the condition Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.356335 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.356378 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.356388 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.356405 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.356418 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:27Z","lastTransitionTime":"2026-02-02T13:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.369777 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 10:04:00.727329603 +0000 UTC Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.380090 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.385373 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.408840 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.408949 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:01:27 crc kubenswrapper[4721]: E0202 13:01:27.409120 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:01:27 crc kubenswrapper[4721]: E0202 13:01:27.409258 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.458761 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.458815 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.458831 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.458857 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.458873 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:27Z","lastTransitionTime":"2026-02-02T13:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.562365 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.562904 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.562915 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.562934 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.562947 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:27Z","lastTransitionTime":"2026-02-02T13:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.592693 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" event={"ID":"b15bc48d-f88d-4b38-a9e1-00bb00b88a52","Type":"ContainerStarted","Data":"3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5"} Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.592753 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" event={"ID":"b15bc48d-f88d-4b38-a9e1-00bb00b88a52","Type":"ContainerStarted","Data":"677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34"} Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.592768 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" event={"ID":"b15bc48d-f88d-4b38-a9e1-00bb00b88a52","Type":"ContainerStarted","Data":"51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3"} Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.592781 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" event={"ID":"b15bc48d-f88d-4b38-a9e1-00bb00b88a52","Type":"ContainerStarted","Data":"dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6"} Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.592794 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" event={"ID":"b15bc48d-f88d-4b38-a9e1-00bb00b88a52","Type":"ContainerStarted","Data":"4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301"} Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.592806 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" event={"ID":"b15bc48d-f88d-4b38-a9e1-00bb00b88a52","Type":"ContainerStarted","Data":"30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8"} Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.594851 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerStarted","Data":"45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b"} Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.610395 4721 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:27Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.622676 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:27Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.644556 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878
a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:27Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.658449 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:27Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.665568 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.665626 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.665640 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.665657 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.665670 4721 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:27Z","lastTransitionTime":"2026-02-02T13:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.675055 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:27Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.688610 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:27Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.702875 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:27Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.718432 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:27Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.732325 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:27Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.742503 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:27Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.759755 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:27Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.768223 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.768302 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.768315 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.768334 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.768348 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:27Z","lastTransitionTime":"2026-02-02T13:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.773146 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:27Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.787553 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:27Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.790196 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/04b1629d-0184-4975-8d4b-7a32913e7389-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-x4lhg\" (UID: \"04b1629d-0184-4975-8d4b-7a32913e7389\") " pod="openshift-multus/multus-additional-cni-plugins-x4lhg" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.790827 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/04b1629d-0184-4975-8d4b-7a32913e7389-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-x4lhg\" (UID: \"04b1629d-0184-4975-8d4b-7a32913e7389\") " pod="openshift-multus/multus-additional-cni-plugins-x4lhg" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.802668 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:27Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.834543 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.873056 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.873137 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.873147 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.873161 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.873172 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:27Z","lastTransitionTime":"2026-02-02T13:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.978821 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.978875 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.978885 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.978907 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:27 crc kubenswrapper[4721]: I0202 13:01:27.978921 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:27Z","lastTransitionTime":"2026-02-02T13:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.081667 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.081721 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.081735 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.081754 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.081768 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:28Z","lastTransitionTime":"2026-02-02T13:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.184044 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.184461 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.184471 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.184489 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.184499 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:28Z","lastTransitionTime":"2026-02-02T13:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.288221 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.288280 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.288293 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.288313 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.288323 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:28Z","lastTransitionTime":"2026-02-02T13:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.370360 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 02:36:32.808931209 +0000 UTC Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.390705 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.390753 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.390763 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.390784 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.390796 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:28Z","lastTransitionTime":"2026-02-02T13:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.409486 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:28 crc kubenswrapper[4721]: E0202 13:01:28.409616 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.493598 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.493639 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.493648 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.493663 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.493672 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:28Z","lastTransitionTime":"2026-02-02T13:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.596800 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.596838 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.596847 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.596862 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.596874 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:28Z","lastTransitionTime":"2026-02-02T13:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.599691 4721 generic.go:334] "Generic (PLEG): container finished" podID="04b1629d-0184-4975-8d4b-7a32913e7389" containerID="300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4" exitCode=0 Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.599789 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" event={"ID":"04b1629d-0184-4975-8d4b-7a32913e7389","Type":"ContainerDied","Data":"300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4"} Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.599856 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" event={"ID":"04b1629d-0184-4975-8d4b-7a32913e7389","Type":"ContainerStarted","Data":"e357aed1992a77065bb25095d84e47a76f6e642eb4294de7abc9e22b6f097e1f"} Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.625712 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:28Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.642600 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:28Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.656012 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:28Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.672560 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cn
i/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:28Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.687916 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:28Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:28 crc 
kubenswrapper[4721]: I0202 13:01:28.696150 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-48kgl"] Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.696574 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-48kgl" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.698048 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.698261 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.698462 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.699466 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.699734 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.699770 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.699782 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.699800 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.699814 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:28Z","lastTransitionTime":"2026-02-02T13:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.704414 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:28Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.717712 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:28Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.738408 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:28Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.753021 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:28Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.766875 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:28Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.780421 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:28Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.799764 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878
a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:28Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.800426 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/09b4ea41-ceb5-481a-899e-c2876ced6d49-host\") pod \"node-ca-48kgl\" (UID: \"09b4ea41-ceb5-481a-899e-c2876ced6d49\") " pod="openshift-image-registry/node-ca-48kgl" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.800472 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/09b4ea41-ceb5-481a-899e-c2876ced6d49-serviceca\") pod \"node-ca-48kgl\" (UID: \"09b4ea41-ceb5-481a-899e-c2876ced6d49\") " pod="openshift-image-registry/node-ca-48kgl" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.800517 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p6j7\" (UniqueName: \"kubernetes.io/projected/09b4ea41-ceb5-481a-899e-c2876ced6d49-kube-api-access-8p6j7\") pod \"node-ca-48kgl\" (UID: \"09b4ea41-ceb5-481a-899e-c2876ced6d49\") " pod="openshift-image-registry/node-ca-48kgl" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.801956 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.801978 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.801986 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.802002 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.802012 4721 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:28Z","lastTransitionTime":"2026-02-02T13:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.813606 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:28Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.825574 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T13:01:28Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.845179 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\"
:0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:28Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.858026 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:28Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.869343 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T13:01:28Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.883267 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:28Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.896157 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:28Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.901937 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/09b4ea41-ceb5-481a-899e-c2876ced6d49-host\") pod \"node-ca-48kgl\" (UID: \"09b4ea41-ceb5-481a-899e-c2876ced6d49\") " pod="openshift-image-registry/node-ca-48kgl" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.902027 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/09b4ea41-ceb5-481a-899e-c2876ced6d49-host\") pod \"node-ca-48kgl\" (UID: \"09b4ea41-ceb5-481a-899e-c2876ced6d49\") " pod="openshift-image-registry/node-ca-48kgl" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.902161 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serviceca\" (UniqueName: \"kubernetes.io/configmap/09b4ea41-ceb5-481a-899e-c2876ced6d49-serviceca\") pod \"node-ca-48kgl\" (UID: \"09b4ea41-ceb5-481a-899e-c2876ced6d49\") " pod="openshift-image-registry/node-ca-48kgl" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.903409 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/09b4ea41-ceb5-481a-899e-c2876ced6d49-serviceca\") pod \"node-ca-48kgl\" (UID: \"09b4ea41-ceb5-481a-899e-c2876ced6d49\") " pod="openshift-image-registry/node-ca-48kgl" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.903458 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p6j7\" (UniqueName: \"kubernetes.io/projected/09b4ea41-ceb5-481a-899e-c2876ced6d49-kube-api-access-8p6j7\") pod \"node-ca-48kgl\" (UID: \"09b4ea41-ceb5-481a-899e-c2876ced6d49\") " pod="openshift-image-registry/node-ca-48kgl" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.903828 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.903877 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.903889 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.903909 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.903922 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:28Z","lastTransitionTime":"2026-02-02T13:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.910249 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:28Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.921811 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p6j7\" (UniqueName: \"kubernetes.io/projected/09b4ea41-ceb5-481a-899e-c2876ced6d49-kube-api-access-8p6j7\") pod \"node-ca-48kgl\" (UID: \"09b4ea41-ceb5-481a-899e-c2876ced6d49\") " pod="openshift-image-registry/node-ca-48kgl" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.922375 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:28Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.937564 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:28Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.950170 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:28Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.964153 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:28Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:28 crc kubenswrapper[4721]: I0202 13:01:28.986848 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:28Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.001569 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:28Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.006508 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.006559 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.006578 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.006601 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.006619 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:29Z","lastTransitionTime":"2026-02-02T13:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.013749 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-48kgl" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.018950 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:29 crc kubenswrapper[4721]: W0202 13:01:29.030851 4721 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09b4ea41_ceb5_481a_899e_c2876ced6d49.slice/crio-394544a1b172ae42c05f9a363db8b77471aa0ca9e90527859d81be99582e16f1 WatchSource:0}: Error finding container 394544a1b172ae42c05f9a363db8b77471aa0ca9e90527859d81be99582e16f1: Status 404 returned error can't find the container with id 394544a1b172ae42c05f9a363db8b77471aa0ca9e90527859d81be99582e16f1 Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.037756 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.048901 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.108703 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.108741 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.108753 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.108769 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.108782 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:29Z","lastTransitionTime":"2026-02-02T13:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.227625 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.232653 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.232709 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.232729 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.232740 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:29Z","lastTransitionTime":"2026-02-02T13:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.335635 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.335678 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.335689 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.335706 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.335717 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:29Z","lastTransitionTime":"2026-02-02T13:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.373027 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 03:30:55.682887488 +0000 UTC Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.409486 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.409599 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:01:29 crc kubenswrapper[4721]: E0202 13:01:29.409765 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:01:29 crc kubenswrapper[4721]: E0202 13:01:29.409616 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.439281 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.439339 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.439351 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.439376 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.439391 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:29Z","lastTransitionTime":"2026-02-02T13:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.542086 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.542143 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.542153 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.542174 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.542187 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:29Z","lastTransitionTime":"2026-02-02T13:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.606003 4721 generic.go:334] "Generic (PLEG): container finished" podID="04b1629d-0184-4975-8d4b-7a32913e7389" containerID="1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb" exitCode=0 Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.606119 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" event={"ID":"04b1629d-0184-4975-8d4b-7a32913e7389","Type":"ContainerDied","Data":"1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb"} Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.612051 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" event={"ID":"b15bc48d-f88d-4b38-a9e1-00bb00b88a52","Type":"ContainerStarted","Data":"27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a"} Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.613511 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-48kgl" event={"ID":"09b4ea41-ceb5-481a-899e-c2876ced6d49","Type":"ContainerStarted","Data":"cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a"} Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.613540 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-48kgl" event={"ID":"09b4ea41-ceb5-481a-899e-c2876ced6d49","Type":"ContainerStarted","Data":"394544a1b172ae42c05f9a363db8b77471aa0ca9e90527859d81be99582e16f1"} Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.624872 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.641970 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift
-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.644757 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.644803 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.644815 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.644833 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.644845 4721 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:29Z","lastTransitionTime":"2026-02-02T13:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.660331 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.675474 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.690429 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.704593 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.717205 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.734105 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.750600 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.750637 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.750646 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.750661 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.750670 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:29Z","lastTransitionTime":"2026-02-02T13:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.752546 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.773773 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.787320 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.806905 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.825193 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc
7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.838821 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.852874 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.854117 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.854149 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.854159 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.854174 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.854186 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:29Z","lastTransitionTime":"2026-02-02T13:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.870377 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.885461 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.899319 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.911624 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.930143 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878
a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.943437 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.956983 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T13:01:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.959148 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.959168 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.959178 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.959195 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.959205 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:29Z","lastTransitionTime":"2026-02-02T13:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.971781 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:01:29 crc kubenswrapper[4721]: I0202 13:01:29.982806 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:29Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.002347 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.015433 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.027821 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cn
i/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.042367 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.053739 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf
5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.061928 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.062429 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.062442 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.062462 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.062478 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:30Z","lastTransitionTime":"2026-02-02T13:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.066835 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.085754 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.109032 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z 
is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.122030 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.133791 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.145805 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.164126 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.166190 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.166252 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.166267 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.166311 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.166328 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:30Z","lastTransitionTime":"2026-02-02T13:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.177399 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.191184 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.212461 4721 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.212546 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878
a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.263635 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.282010 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.286276 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.286314 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.286325 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.286345 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.286358 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:30Z","lastTransitionTime":"2026-02-02T13:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.310517 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.332102 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.352351 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.369122 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d746
2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.373855 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 09:18:23.560043886 +0000 UTC Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.385060 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.389587 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.389620 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.389632 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.389649 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.389661 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:30Z","lastTransitionTime":"2026-02-02T13:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.411844 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:30 crc kubenswrapper[4721]: E0202 13:01:30.412104 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.432724 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731f
b7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.447423 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.459678 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.474623 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
,{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.491894 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.493661 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.493696 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.493708 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.493725 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.493735 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:30Z","lastTransitionTime":"2026-02-02T13:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.506441 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.540593 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.582901 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.595715 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.595748 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.595758 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.595773 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.595783 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:30Z","lastTransitionTime":"2026-02-02T13:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.619616 4721 generic.go:334] "Generic (PLEG): container finished" podID="04b1629d-0184-4975-8d4b-7a32913e7389" containerID="7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b" exitCode=0 Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.619667 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" event={"ID":"04b1629d-0184-4975-8d4b-7a32913e7389","Type":"ContainerDied","Data":"7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b"} Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.622875 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a57
8bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.662759 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.698101 4721 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.698136 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.698146 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.698160 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.698169 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:30Z","lastTransitionTime":"2026-02-02T13:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.711312 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z 
is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.742550 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.780808 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.800537 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.800578 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.800587 4721 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.800604 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.800613 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:30Z","lastTransitionTime":"2026-02-02T13:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.820265 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.861240 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.901569 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.902695 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.902722 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.902733 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.902748 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.902759 4721 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:30Z","lastTransitionTime":"2026-02-02T13:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.942297 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:30 crc kubenswrapper[4721]: I0202 13:01:30.981851 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:30Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.005670 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.005713 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.005724 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.005740 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.005751 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:31Z","lastTransitionTime":"2026-02-02T13:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.021545 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:31Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.065274 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:31Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.109863 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.109939 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.109965 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.109998 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.110024 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:31Z","lastTransitionTime":"2026-02-02T13:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.110934 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:31Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.143311 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:31Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.189597 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:31Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.212769 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.212823 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.212835 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.212890 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.212908 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:31Z","lastTransitionTime":"2026-02-02T13:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.226149 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:31Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.262732 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:31Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.304097 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:31Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.315700 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.315761 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.315777 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.315797 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.315810 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:31Z","lastTransitionTime":"2026-02-02T13:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.342750 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:31Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.374782 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 15:11:06.699857754 +0000 UTC Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.390705 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878
a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:31Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.409084 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.409107 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:01:31 crc kubenswrapper[4721]: E0202 13:01:31.409288 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:01:31 crc kubenswrapper[4721]: E0202 13:01:31.409429 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.419745 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.419787 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.419797 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.419816 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.419826 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:31Z","lastTransitionTime":"2026-02-02T13:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.422557 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:31Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.460723 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T13:01:31Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.521833 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.521880 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.521889 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.521910 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.521928 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:31Z","lastTransitionTime":"2026-02-02T13:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.624894 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.624927 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.624935 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.624950 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.624962 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:31Z","lastTransitionTime":"2026-02-02T13:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.631102 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" event={"ID":"b15bc48d-f88d-4b38-a9e1-00bb00b88a52","Type":"ContainerStarted","Data":"364a12dc7d10ca52af13d40bb44682bd7d6d8458c365e3e2efc89bdbbfb13667"} Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.631430 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.634451 4721 generic.go:334] "Generic (PLEG): container finished" podID="04b1629d-0184-4975-8d4b-7a32913e7389" containerID="beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89" exitCode=0 Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.634503 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" event={"ID":"04b1629d-0184-4975-8d4b-7a32913e7389","Type":"ContainerDied","Data":"beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89"} Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.649107 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:31Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.704514 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:31Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.709832 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.727421 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.727589 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.727676 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.727749 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 
02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.727653 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:31Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.727817 4721 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:31Z","lastTransitionTime":"2026-02-02T13:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.745797 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:31Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.773939 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878
a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:31Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.792206 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:31Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.804139 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T13:01:31Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.820964 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:31Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.830656 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.830707 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.830719 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.830736 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.830789 4721 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:31Z","lastTransitionTime":"2026-02-02T13:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.836177 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:31Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.865733 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:31Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.903001 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\
\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 
13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:31Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.935267 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.935305 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.935314 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.935329 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.935340 4721 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:31Z","lastTransitionTime":"2026-02-02T13:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.941725 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:31Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:31 crc kubenswrapper[4721]: I0202 13:01:31.986138 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://364a12dc7d10ca52af13d40bb44682bd7d6d8458c365e3e2efc89bdbbfb13667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:31Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.034889 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:32Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.039035 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.039144 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.039158 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.039181 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.039196 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:32Z","lastTransitionTime":"2026-02-02T13:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.062964 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:32Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.104739 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:32Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.141418 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.141455 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.141466 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.141482 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.141493 4721 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:32Z","lastTransitionTime":"2026-02-02T13:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.144130 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:32Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.195817 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:32Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.224427 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-k
ube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:32Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.243497 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.243543 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.243551 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.243569 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.243578 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:32Z","lastTransitionTime":"2026-02-02T13:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.264250 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:32Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.306215 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://364a12dc7d10ca52af13d40bb44682bd7d6d8458c365e3e2efc89bdbbfb13667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:32Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.343874 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:32Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.346883 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.346962 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.346987 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.347017 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.347043 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:32Z","lastTransitionTime":"2026-02-02T13:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.375156 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 07:47:46.445797728 +0000 UTC Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.382465 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}
\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:32Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.408746 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:32 crc kubenswrapper[4721]: E0202 13:01:32.408950 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.423769 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:32Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.450882 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.450942 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.450954 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.450980 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.451006 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:32Z","lastTransitionTime":"2026-02-02T13:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.468222 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:32Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.511373 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:32Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.544750 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:32Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.554149 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.554225 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.554236 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.554254 4721 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.554266 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:32Z","lastTransitionTime":"2026-02-02T13:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.590401 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe8
8e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:32Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.623701 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:32Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.642209 4721 generic.go:334] "Generic (PLEG): container finished" podID="04b1629d-0184-4975-8d4b-7a32913e7389" containerID="2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868" exitCode=0 Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.642306 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" event={"ID":"04b1629d-0184-4975-8d4b-7a32913e7389","Type":"ContainerDied","Data":"2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868"} Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.642369 4721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.642949 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.656196 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.656252 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.656266 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.656289 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.656369 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:32Z","lastTransitionTime":"2026-02-02T13:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.659451 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:32Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.686693 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.701839 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:32Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.742269 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:32Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.758976 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.759030 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.759040 4721 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.759060 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.759094 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:32Z","lastTransitionTime":"2026-02-02T13:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.784490 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:32Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.824480 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:32Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.862793 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.862845 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.862855 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.862875 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.862886 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:32Z","lastTransitionTime":"2026-02-02T13:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.868253 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:32Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.901198 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:32Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.939753 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T13:01:32Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.965427 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.965461 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.965469 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.965484 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.965493 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:32Z","lastTransitionTime":"2026-02-02T13:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:32 crc kubenswrapper[4721]: I0202 13:01:32.982835 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9
f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints 
registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:32Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.023441 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:33Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.063043 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:33Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.068175 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.068234 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.068245 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.068260 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.068270 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:33Z","lastTransitionTime":"2026-02-02T13:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.101574 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:33Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.144606 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:33Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.170714 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.170760 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.170772 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.170792 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.170805 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:33Z","lastTransitionTime":"2026-02-02T13:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.183543 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:33Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.221811 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:33Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.270639 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://364a12dc7d10ca52af13d40bb44682bd7d6d8458c365e3e2efc89bdbbfb13667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:33Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.273681 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.273731 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.273743 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:33 crc 
kubenswrapper[4721]: I0202 13:01:33.273765 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.273778 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:33Z","lastTransitionTime":"2026-02-02T13:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.303640 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:33Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.347597 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:33Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.375360 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 01:17:17.151726756 +0000 UTC Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.376399 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.376433 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.376447 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.376471 4721 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.376488 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:33Z","lastTransitionTime":"2026-02-02T13:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.383964 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:33Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.409277 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.409352 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:01:33 crc kubenswrapper[4721]: E0202 13:01:33.409462 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:01:33 crc kubenswrapper[4721]: E0202 13:01:33.409529 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.420949 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:33Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.461689 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:33Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.478821 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.478872 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.478881 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.478900 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.478912 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:33Z","lastTransitionTime":"2026-02-02T13:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.502772 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:33Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.542764 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:33Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.581929 4721 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.581982 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.581997 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.582017 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.582030 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:33Z","lastTransitionTime":"2026-02-02T13:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.586172 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://364a12dc7d10ca52af13d40bb44682bd7d6d8458
c365e3e2efc89bdbbfb13667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:33Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.623589 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:33Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.648715 4721 generic.go:334] "Generic (PLEG): container finished" podID="04b1629d-0184-4975-8d4b-7a32913e7389" containerID="5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d" exitCode=0 Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.648790 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" event={"ID":"04b1629d-0184-4975-8d4b-7a32913e7389","Type":"ContainerDied","Data":"5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d"} Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.648864 4721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.665973 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:33Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.689316 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.689829 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.689839 4721 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.689858 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.689872 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:33Z","lastTransitionTime":"2026-02-02T13:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.701360 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:33Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.740998 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:33Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.792029 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.792085 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.792094 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.792107 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.792116 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:33Z","lastTransitionTime":"2026-02-02T13:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.796898 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:33Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.825968 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:33Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.867413 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T13:01:33Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.894561 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.894592 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.894601 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.894614 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.894623 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:33Z","lastTransitionTime":"2026-02-02T13:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.903311 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:33Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.940690 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:33Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.985646 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://364a12dc7d10ca52af13d40bb44682bd7d6d8458c365e3e2efc89bdbbfb13667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:33Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.997573 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.997769 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.997864 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:33 crc 
kubenswrapper[4721]: I0202 13:01:33.997948 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:33 crc kubenswrapper[4721]: I0202 13:01:33.998037 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:33Z","lastTransitionTime":"2026-02-02T13:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.023431 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:34Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.086451 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:34Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.099003 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:34Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.100368 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.100481 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.100549 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.100617 4721 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.100672 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:34Z","lastTransitionTime":"2026-02-02T13:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.138793 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:34Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.188390 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878
a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:34Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.202980 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.203030 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.203041 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.203058 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.203085 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:34Z","lastTransitionTime":"2026-02-02T13:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.220655 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:34Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.258873 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:34Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.299343 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:34Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.304685 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.304714 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.304722 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.304736 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.304746 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:34Z","lastTransitionTime":"2026-02-02T13:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.344459 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:34Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.376379 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 04:23:20.408202988 +0000 UTC Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.381786 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:34Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.406718 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.406772 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.406788 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.406809 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.406826 4721 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:34Z","lastTransitionTime":"2026-02-02T13:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.409057 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:34 crc kubenswrapper[4721]: E0202 13:01:34.409177 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.426791 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:34Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.465328 4721 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:34Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.509037 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.509083 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.509094 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.509110 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.509121 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:34Z","lastTransitionTime":"2026-02-02T13:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.611098 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.611156 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.611170 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.611190 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.611204 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:34Z","lastTransitionTime":"2026-02-02T13:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.657142 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" event={"ID":"04b1629d-0184-4975-8d4b-7a32913e7389","Type":"ContainerStarted","Data":"54ef571a7e891a1eb9aa9c8d0c3590e57c27aed2b369226612798dfa8e9fff15"} Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.660899 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwcs2_b15bc48d-f88d-4b38-a9e1-00bb00b88a52/ovnkube-controller/0.log" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.674603 4721 generic.go:334] "Generic (PLEG): container finished" podID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerID="364a12dc7d10ca52af13d40bb44682bd7d6d8458c365e3e2efc89bdbbfb13667" exitCode=1 Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.674708 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" event={"ID":"b15bc48d-f88d-4b38-a9e1-00bb00b88a52","Type":"ContainerDied","Data":"364a12dc7d10ca52af13d40bb44682bd7d6d8458c365e3e2efc89bdbbfb13667"} Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.675736 4721 scope.go:117] "RemoveContainer" containerID="364a12dc7d10ca52af13d40bb44682bd7d6d8458c365e3e2efc89bdbbfb13667" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.677878 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:34Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.692753 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:34Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.706706 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:34Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.714709 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.714741 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.714750 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.714762 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.714772 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:34Z","lastTransitionTime":"2026-02-02T13:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.721161 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:34Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.735674 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:34Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.746780 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:34Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.794229 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:34Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.817704 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.817754 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.817768 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.817790 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.817804 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:34Z","lastTransitionTime":"2026-02-02T13:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.834152 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:34Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.852951 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:34Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.868312 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:34Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.901705 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cn
i/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:34Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.919494 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.919523 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.919532 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.919546 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.919558 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:34Z","lastTransitionTime":"2026-02-02T13:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.946157 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef571a7e891a1eb9aa9c8d0c3590e57c27aed2b369226612798dfa8e9fff15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:34Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.982905 4721 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fb
b85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:34Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:34 crc kubenswrapper[4721]: I0202 13:01:34.984259 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:01:34 crc kubenswrapper[4721]: E0202 13:01:34.984461 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:01:50.984433639 +0000 UTC m=+51.286948028 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.021978 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.022049 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.022118 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.022159 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.022203 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:35Z","lastTransitionTime":"2026-02-02T13:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.022996 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:35Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.065677 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://364a12dc7d10ca52af13d40bb44682bd7d6d8458c365e3e2efc89bdbbfb13667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:35Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.085453 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.085507 4721 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.085533 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.085558 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:35 crc kubenswrapper[4721]: E0202 13:01:35.085635 4721 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 13:01:35 crc kubenswrapper[4721]: E0202 13:01:35.085676 4721 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 13:01:35 crc kubenswrapper[4721]: E0202 13:01:35.085690 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 13:01:51.085672445 +0000 UTC m=+51.388186834 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 13:01:35 crc kubenswrapper[4721]: E0202 13:01:35.085635 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:01:35 crc kubenswrapper[4721]: E0202 13:01:35.085733 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:01:35 crc kubenswrapper[4721]: E0202 13:01:35.085753 4721 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:01:35 crc kubenswrapper[4721]: E0202 13:01:35.085762 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-02 13:01:51.085750667 +0000 UTC m=+51.388265056 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 13:01:35 crc kubenswrapper[4721]: E0202 13:01:35.085764 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:01:35 crc kubenswrapper[4721]: E0202 13:01:35.085832 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:01:35 crc kubenswrapper[4721]: E0202 13:01:35.085845 4721 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:01:35 crc kubenswrapper[4721]: E0202 13:01:35.085791 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 13:01:51.085778638 +0000 UTC m=+51.388293027 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:01:35 crc kubenswrapper[4721]: E0202 13:01:35.085928 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 13:01:51.085910581 +0000 UTC m=+51.388424970 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.121524 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:35Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.125703 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.125742 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.125756 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.125784 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.125794 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:35Z","lastTransitionTime":"2026-02-02T13:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.142630 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:35Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.192177 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:35Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.226349 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:35Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.227778 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.227835 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.227844 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.227859 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.227868 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:35Z","lastTransitionTime":"2026-02-02T13:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.261524 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:35Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.303151 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef571a7e891a1eb9aa9c8d0c3590e57c27aed2b369226612798dfa8e9fff15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:35Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.330085 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.330125 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.330133 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.330147 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.330155 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:35Z","lastTransitionTime":"2026-02-02T13:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.344009 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06
\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:35Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.376776 4721 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 02:32:49.418177546 +0000 UTC Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.382954 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:35Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.408846 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.408846 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:01:35 crc kubenswrapper[4721]: E0202 13:01:35.409040 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:01:35 crc kubenswrapper[4721]: E0202 13:01:35.409110 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.432252 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.432292 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.432303 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.432318 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.432332 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:35Z","lastTransitionTime":"2026-02-02T13:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.432745 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://364a12dc7d10ca52af13d40bb44682bd7d6d8458
c365e3e2efc89bdbbfb13667\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://364a12dc7d10ca52af13d40bb44682bd7d6d8458c365e3e2efc89bdbbfb13667\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"message\\\":\\\"etes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 13:01:34.589960 5998 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 13:01:34.590331 5998 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:01:34.590798 5998 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 13:01:34.590855 5998 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 13:01:34.590894 5998 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 13:01:34.590910 5998 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 13:01:34.590937 5998 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 13:01:34.590957 5998 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 13:01:34.590979 5998 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 13:01:34.591014 5998 factory.go:656] Stopping watch factory\\\\nI0202 13:01:34.591036 5998 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 13:01:34.591054 5998 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 13:01:34.591089 5998 ovnkube.go:599] Stopped ovnkube\\\\nI0202 
13\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:35Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.464313 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4b
a8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:35Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.502301 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:35Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.534738 4721 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.534804 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.534823 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.534847 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.534864 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:35Z","lastTransitionTime":"2026-02-02T13:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.538362 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:35Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.583920 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:35Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.623539 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:35Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.636869 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.636907 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.636917 4721 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.636933 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.636946 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:35Z","lastTransitionTime":"2026-02-02T13:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.662116 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:35Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.679846 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwcs2_b15bc48d-f88d-4b38-a9e1-00bb00b88a52/ovnkube-controller/1.log" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.680583 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwcs2_b15bc48d-f88d-4b38-a9e1-00bb00b88a52/ovnkube-controller/0.log" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.683234 4721 generic.go:334] 
"Generic (PLEG): container finished" podID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerID="128d117e2ed6ed05a65511943fe20f8f9d5cd3f62044941596a47d3b7a03eea8" exitCode=1 Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.683272 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" event={"ID":"b15bc48d-f88d-4b38-a9e1-00bb00b88a52","Type":"ContainerDied","Data":"128d117e2ed6ed05a65511943fe20f8f9d5cd3f62044941596a47d3b7a03eea8"} Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.683317 4721 scope.go:117] "RemoveContainer" containerID="364a12dc7d10ca52af13d40bb44682bd7d6d8458c365e3e2efc89bdbbfb13667" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.683955 4721 scope.go:117] "RemoveContainer" containerID="128d117e2ed6ed05a65511943fe20f8f9d5cd3f62044941596a47d3b7a03eea8" Feb 02 13:01:35 crc kubenswrapper[4721]: E0202 13:01:35.684133 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pwcs2_openshift-ovn-kubernetes(b15bc48d-f88d-4b38-a9e1-00bb00b88a52)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.702229 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:35Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.740229 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.740283 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.740300 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.740327 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.740344 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:35Z","lastTransitionTime":"2026-02-02T13:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.743002 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",
\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:35Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.800297 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://128d117e2ed6ed05a65511943fe20f8f9d5cd3f6
2044941596a47d3b7a03eea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://364a12dc7d10ca52af13d40bb44682bd7d6d8458c365e3e2efc89bdbbfb13667\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"message\\\":\\\"etes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 13:01:34.589960 5998 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 13:01:34.590331 5998 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:01:34.590798 5998 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 13:01:34.590855 5998 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 13:01:34.590894 5998 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 13:01:34.590910 5998 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 13:01:34.590937 5998 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 13:01:34.590957 5998 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 13:01:34.590979 5998 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 13:01:34.591014 5998 factory.go:656] Stopping watch factory\\\\nI0202 13:01:34.591036 5998 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 13:01:34.591054 5998 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 13:01:34.591089 5998 ovnkube.go:599] Stopped ovnkube\\\\nI0202 13\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://128d117e2ed6ed05a65511943fe20f8f9d5cd3f62044941596a47d3b7a03eea8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"message\\\":\\\"ace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:01:35.608892 6180 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:01:35.609124 6180 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 13:01:35.609151 6180 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 13:01:35.609186 6180 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 13:01:35.609190 6180 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 13:01:35.609206 6180 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 13:01:35.609217 6180 factory.go:656] Stopping watch factory\\\\nI0202 13:01:35.609232 6180 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 13:01:35.609240 6180 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 13:01:35.609246 6180 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 13:01:35.609251 6180 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 13:01:35.609263 6180 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 13:01:35.609269 6180 
handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 13:01:35.608240 6180 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\
\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:35Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.822705 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:35Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.844003 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.844062 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.844113 4721 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.844138 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.844156 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:35Z","lastTransitionTime":"2026-02-02T13:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.866971 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:35Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.905801 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:35Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.944130 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:35Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.946825 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.946855 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.946866 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.946882 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.946902 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:35Z","lastTransitionTime":"2026-02-02T13:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.952483 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.952516 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.952529 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.952543 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.952558 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:35Z","lastTransitionTime":"2026-02-02T13:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:35 crc kubenswrapper[4721]: E0202 13:01:35.968120 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:35Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.972958 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.973021 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.973038 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.973061 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.973115 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:35Z","lastTransitionTime":"2026-02-02T13:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.986116 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:35Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:35 crc kubenswrapper[4721]: E0202 13:01:35.988749 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:35Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.993094 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.993138 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.993323 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.993352 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:35 crc kubenswrapper[4721]: I0202 13:01:35.993364 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:35Z","lastTransitionTime":"2026-02-02T13:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:36 crc kubenswrapper[4721]: E0202 13:01:36.016830 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.021549 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.021610 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.021623 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.021643 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.021656 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:36Z","lastTransitionTime":"2026-02-02T13:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.023493 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:36 crc 
kubenswrapper[4721]: E0202 13:01:36.037370 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider 
started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d
34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.041945 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.042007 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:36 
crc kubenswrapper[4721]: I0202 13:01:36.042021 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.042040 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.042054 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:36Z","lastTransitionTime":"2026-02-02T13:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:36 crc kubenswrapper[4721]: E0202 13:01:36.058047 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:36 crc kubenswrapper[4721]: E0202 13:01:36.058269 4721 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.060425 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.060464 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.060476 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.060494 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.060509 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:36Z","lastTransitionTime":"2026-02-02T13:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.070630 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.103375 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.142365 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.162692 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.162747 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.162758 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.162779 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.162793 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:36Z","lastTransitionTime":"2026-02-02T13:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.182224 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.225573 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.265890 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.265961 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.265986 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.266017 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.266038 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:36Z","lastTransitionTime":"2026-02-02T13:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.270324 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef571a7e891a1eb9aa9c8d0c3590e57c27aed2b369226612798dfa8e9fff15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:
01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.368502 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.368553 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.368575 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.368599 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.368620 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:36Z","lastTransitionTime":"2026-02-02T13:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.380161 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 00:17:43.75048225 +0000 UTC Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.424712 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:36 crc kubenswrapper[4721]: E0202 13:01:36.424858 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.471158 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.471221 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.471232 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.471245 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.471255 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:36Z","lastTransitionTime":"2026-02-02T13:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.575214 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.575326 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.575349 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.575372 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.575389 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:36Z","lastTransitionTime":"2026-02-02T13:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.678954 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.679039 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.679057 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.679125 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.679145 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:36Z","lastTransitionTime":"2026-02-02T13:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.690208 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwcs2_b15bc48d-f88d-4b38-a9e1-00bb00b88a52/ovnkube-controller/1.log" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.695346 4721 scope.go:117] "RemoveContainer" containerID="128d117e2ed6ed05a65511943fe20f8f9d5cd3f62044941596a47d3b7a03eea8" Feb 02 13:01:36 crc kubenswrapper[4721]: E0202 13:01:36.695505 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pwcs2_openshift-ovn-kubernetes(b15bc48d-f88d-4b38-a9e1-00bb00b88a52)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.721924 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://128d117e2ed6ed05a65511943fe20f8f9d5cd3f6
2044941596a47d3b7a03eea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://128d117e2ed6ed05a65511943fe20f8f9d5cd3f62044941596a47d3b7a03eea8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"message\\\":\\\"ace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:01:35.608892 6180 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:01:35.609124 6180 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 13:01:35.609151 6180 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 13:01:35.609186 6180 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 13:01:35.609190 6180 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 13:01:35.609206 6180 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 13:01:35.609217 6180 factory.go:656] Stopping watch factory\\\\nI0202 13:01:35.609232 6180 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 13:01:35.609240 6180 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 13:01:35.609246 6180 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 13:01:35.609251 6180 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 13:01:35.609263 6180 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 13:01:35.609269 6180 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 13:01:35.608240 6180 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwcs2_openshift-ovn-kubernetes(b15bc48d-f88d-4b38-a9e1-00bb00b88a52)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.739910 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.755254 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.771492 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.781698 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.781763 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.781780 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.781803 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.781818 4721 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:36Z","lastTransitionTime":"2026-02-02T13:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.791887 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.810931 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.824624 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.842753 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878
a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.861014 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.875681 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T13:01:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.885016 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.885059 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.885085 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.885107 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.885119 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:36Z","lastTransitionTime":"2026-02-02T13:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.894455 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.911725 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.931149 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef571a7e891a1eb9aa9c8d0c3590e57c27aed2b369226612798dfa8e9fff15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.947585 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d746
2\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.962766 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:36Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.987925 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.988011 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.988037 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.988100 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:36 crc kubenswrapper[4721]: I0202 13:01:36.988124 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:36Z","lastTransitionTime":"2026-02-02T13:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.091765 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.091808 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.091835 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.091851 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.091860 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:37Z","lastTransitionTime":"2026-02-02T13:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.195247 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.195315 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.195340 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.195372 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.195397 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:37Z","lastTransitionTime":"2026-02-02T13:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.298352 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.298436 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.298468 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.298501 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.298522 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:37Z","lastTransitionTime":"2026-02-02T13:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.380853 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 15:32:28.73693421 +0000 UTC Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.401023 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.401130 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.401150 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.401176 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.401201 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:37Z","lastTransitionTime":"2026-02-02T13:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.409480 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.409480 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:01:37 crc kubenswrapper[4721]: E0202 13:01:37.409722 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:01:37 crc kubenswrapper[4721]: E0202 13:01:37.409799 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.504224 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.504267 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.504280 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.504297 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.504310 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:37Z","lastTransitionTime":"2026-02-02T13:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.607391 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.607453 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.607472 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.607495 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.607564 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:37Z","lastTransitionTime":"2026-02-02T13:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.709773 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.709816 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.709826 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.709894 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.709908 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:37Z","lastTransitionTime":"2026-02-02T13:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.813197 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.813251 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.813267 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.813289 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.813329 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:37Z","lastTransitionTime":"2026-02-02T13:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.917593 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.917629 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.917639 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.917656 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:37 crc kubenswrapper[4721]: I0202 13:01:37.917667 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:37Z","lastTransitionTime":"2026-02-02T13:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.020372 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.020421 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.020430 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.020444 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.020454 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:38Z","lastTransitionTime":"2026-02-02T13:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.122843 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.122883 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.122891 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.122908 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.122917 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:38Z","lastTransitionTime":"2026-02-02T13:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.226248 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.226299 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.226312 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.226332 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.226346 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:38Z","lastTransitionTime":"2026-02-02T13:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.330834 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.330889 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.330903 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.330922 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.330933 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:38Z","lastTransitionTime":"2026-02-02T13:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.381399 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 20:41:17.533488669 +0000 UTC
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.409774 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 13:01:38 crc kubenswrapper[4721]: E0202 13:01:38.410014 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.433183 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.433226 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.433238 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.433255 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.433267 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:38Z","lastTransitionTime":"2026-02-02T13:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.535446 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.535484 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.535504 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.535519 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.535529 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:38Z","lastTransitionTime":"2026-02-02T13:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.638165 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.638218 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.638233 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.638253 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.638267 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:38Z","lastTransitionTime":"2026-02-02T13:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.741649 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.741706 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.741719 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.741737 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.741749 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:38Z","lastTransitionTime":"2026-02-02T13:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.845225 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.845268 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.845276 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.845292 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.845301 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:38Z","lastTransitionTime":"2026-02-02T13:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.948723 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.948777 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.948788 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.948808 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:01:38 crc kubenswrapper[4721]: I0202 13:01:38.948820 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:38Z","lastTransitionTime":"2026-02-02T13:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.009121 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc"]
Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.009738 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.011612 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.013718 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.035597 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a
67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.051717 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js5c8\" (UniqueName: \"kubernetes.io/projected/ecfc8a9e-993f-494a-ba91-4132345cee05-kube-api-access-js5c8\") pod \"ovnkube-control-plane-749d76644c-sz7tc\" (UID: \"ecfc8a9e-993f-494a-ba91-4132345cee05\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.051811 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ecfc8a9e-993f-494a-ba91-4132345cee05-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-sz7tc\" (UID: \"ecfc8a9e-993f-494a-ba91-4132345cee05\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.051879 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ecfc8a9e-993f-494a-ba91-4132345cee05-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-sz7tc\" (UID: \"ecfc8a9e-993f-494a-ba91-4132345cee05\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.051909 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ecfc8a9e-993f-494a-ba91-4132345cee05-env-overrides\") pod \"ovnkube-control-plane-749d76644c-sz7tc\" (UID: \"ecfc8a9e-993f-494a-ba91-4132345cee05\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.053759 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.053832 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.053846 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.053866 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.053878 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:39Z","lastTransitionTime":"2026-02-02T13:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.056280 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.071896 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T13:01:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.086910 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.107399 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef571a7e891a1eb9aa9c8d0c3590e57c27aed2b369226612798dfa8e9fff15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.125486 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.144111 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.152464 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js5c8\" (UniqueName: \"kubernetes.io/projected/ecfc8a9e-993f-494a-ba91-4132345cee05-kube-api-access-js5c8\") pod \"ovnkube-control-plane-749d76644c-sz7tc\" (UID: \"ecfc8a9e-993f-494a-ba91-4132345cee05\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.152496 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ecfc8a9e-993f-494a-ba91-4132345cee05-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-sz7tc\" (UID: \"ecfc8a9e-993f-494a-ba91-4132345cee05\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.152528 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ecfc8a9e-993f-494a-ba91-4132345cee05-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-sz7tc\" (UID: \"ecfc8a9e-993f-494a-ba91-4132345cee05\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.152547 4721 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ecfc8a9e-993f-494a-ba91-4132345cee05-env-overrides\") pod \"ovnkube-control-plane-749d76644c-sz7tc\" (UID: \"ecfc8a9e-993f-494a-ba91-4132345cee05\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc"
Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.153207 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ecfc8a9e-993f-494a-ba91-4132345cee05-env-overrides\") pod \"ovnkube-control-plane-749d76644c-sz7tc\" (UID: \"ecfc8a9e-993f-494a-ba91-4132345cee05\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc"
Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.153610 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ecfc8a9e-993f-494a-ba91-4132345cee05-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-sz7tc\" (UID: \"ecfc8a9e-993f-494a-ba91-4132345cee05\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc"
Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.156207 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.156236 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.156246 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.156262 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.156272 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:39Z","lastTransitionTime":"2026-02-02T13:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.165377 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.168918 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ecfc8a9e-993f-494a-ba91-4132345cee05-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-sz7tc\" (UID: \"ecfc8a9e-993f-494a-ba91-4132345cee05\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.176521 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js5c8\" (UniqueName: \"kubernetes.io/projected/ecfc8a9e-993f-494a-ba91-4132345cee05-kube-api-access-js5c8\") pod \"ovnkube-control-plane-749d76644c-sz7tc\" (UID: \"ecfc8a9e-993f-494a-ba91-4132345cee05\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.184154 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887
ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.196545 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.215990 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://128d117e2ed6ed05a65511943fe20f8f9d5cd3f6
2044941596a47d3b7a03eea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://128d117e2ed6ed05a65511943fe20f8f9d5cd3f62044941596a47d3b7a03eea8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"message\\\":\\\"ace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:01:35.608892 6180 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:01:35.609124 6180 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 13:01:35.609151 6180 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 13:01:35.609186 6180 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 13:01:35.609190 6180 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 13:01:35.609206 6180 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 13:01:35.609217 6180 factory.go:656] Stopping watch factory\\\\nI0202 13:01:35.609232 6180 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 13:01:35.609240 6180 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 13:01:35.609246 6180 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 13:01:35.609251 6180 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 13:01:35.609263 6180 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 13:01:35.609269 6180 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 13:01:35.608240 6180 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwcs2_openshift-ovn-kubernetes(b15bc48d-f88d-4b38-a9e1-00bb00b88a52)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.228692 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecfc8a9e-993f-494a-ba91-4132345cee05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sz7tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.241203 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.255180 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.259489 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.259526 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.259542 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.259563 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.259577 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:39Z","lastTransitionTime":"2026-02-02T13:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.268334 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.279445 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:39Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.324826 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" Feb 02 13:01:39 crc kubenswrapper[4721]: W0202 13:01:39.339463 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecfc8a9e_993f_494a_ba91_4132345cee05.slice/crio-64d78dd426cead0bb3cc56e04bb023ff7c0c3ad87f5f83a45c47481343606c56 WatchSource:0}: Error finding container 64d78dd426cead0bb3cc56e04bb023ff7c0c3ad87f5f83a45c47481343606c56: Status 404 returned error can't find the container with id 64d78dd426cead0bb3cc56e04bb023ff7c0c3ad87f5f83a45c47481343606c56 Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.362506 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.362548 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.362561 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.362580 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.362593 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:39Z","lastTransitionTime":"2026-02-02T13:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.382457 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 00:43:47.471245013 +0000 UTC Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.409120 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.409153 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:01:39 crc kubenswrapper[4721]: E0202 13:01:39.409277 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:01:39 crc kubenswrapper[4721]: E0202 13:01:39.409841 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.465111 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.465156 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.465167 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.465187 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.465199 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:39Z","lastTransitionTime":"2026-02-02T13:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.568732 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.568791 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.568810 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.568834 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.568852 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:39Z","lastTransitionTime":"2026-02-02T13:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.671894 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.671967 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.671990 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.672021 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.672043 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:39Z","lastTransitionTime":"2026-02-02T13:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.709533 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" event={"ID":"ecfc8a9e-993f-494a-ba91-4132345cee05","Type":"ContainerStarted","Data":"02e9bc6a3dd0746c09c22e858873a85ac0fd67fe903dfa6f2cd5061020ce5230"} Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.709581 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" event={"ID":"ecfc8a9e-993f-494a-ba91-4132345cee05","Type":"ContainerStarted","Data":"64d78dd426cead0bb3cc56e04bb023ff7c0c3ad87f5f83a45c47481343606c56"} Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.774315 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.774364 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.774378 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.774400 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.774415 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:39Z","lastTransitionTime":"2026-02-02T13:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.877366 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.877410 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.877424 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.877440 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.877452 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:39Z","lastTransitionTime":"2026-02-02T13:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.980201 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.980258 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.980270 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.980296 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:39 crc kubenswrapper[4721]: I0202 13:01:39.980309 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:39Z","lastTransitionTime":"2026-02-02T13:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.083044 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.083122 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.083142 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.083169 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.083186 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:40Z","lastTransitionTime":"2026-02-02T13:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.105557 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-xqz79"] Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.106200 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:01:40 crc kubenswrapper[4721]: E0202 13:01:40.106280 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.124497 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.142442 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.158380 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.162217 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbwws\" (UniqueName: \"kubernetes.io/projected/bfab3ffb-8798-423d-9b55-83868b76a14e-kube-api-access-vbwws\") pod \"network-metrics-daemon-xqz79\" (UID: \"bfab3ffb-8798-423d-9b55-83868b76a14e\") " pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.162286 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfab3ffb-8798-423d-9b55-83868b76a14e-metrics-certs\") pod \"network-metrics-daemon-xqz79\" (UID: \"bfab3ffb-8798-423d-9b55-83868b76a14e\") " pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.175989 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.186826 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.186859 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.186868 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.186882 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.186892 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:40Z","lastTransitionTime":"2026-02-02T13:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.196241 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef571a7e891a1eb9aa9c8d0c3590e57c27aed2b369226612798dfa8e9fff15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:
01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.213868 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.230456 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.252267 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://128d117e2ed6ed05a65511943fe20f8f9d5cd3f62044941596a47d3b7a03eea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://128d117e2ed6ed05a65511943fe20f8f9d5cd3f62044941596a47d3b7a03eea8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"message\\\":\\\"ace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:01:35.608892 6180 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:01:35.609124 6180 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 13:01:35.609151 6180 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 13:01:35.609186 6180 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 13:01:35.609190 6180 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 13:01:35.609206 6180 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 13:01:35.609217 6180 factory.go:656] Stopping watch factory\\\\nI0202 13:01:35.609232 6180 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 13:01:35.609240 6180 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 13:01:35.609246 6180 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 13:01:35.609251 6180 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 13:01:35.609263 6180 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 13:01:35.609269 6180 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 13:01:35.608240 6180 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwcs2_openshift-ovn-kubernetes(b15bc48d-f88d-4b38-a9e1-00bb00b88a52)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.263728 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbwws\" (UniqueName: \"kubernetes.io/projected/bfab3ffb-8798-423d-9b55-83868b76a14e-kube-api-access-vbwws\") pod \"network-metrics-daemon-xqz79\" (UID: \"bfab3ffb-8798-423d-9b55-83868b76a14e\") " pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.263774 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfab3ffb-8798-423d-9b55-83868b76a14e-metrics-certs\") pod \"network-metrics-daemon-xqz79\" (UID: \"bfab3ffb-8798-423d-9b55-83868b76a14e\") " pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:01:40 crc kubenswrapper[4721]: E0202 13:01:40.263895 4721 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 13:01:40 crc kubenswrapper[4721]: E0202 13:01:40.263944 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfab3ffb-8798-423d-9b55-83868b76a14e-metrics-certs podName:bfab3ffb-8798-423d-9b55-83868b76a14e nodeName:}" failed. No retries permitted until 2026-02-02 13:01:40.763929582 +0000 UTC m=+41.066443981 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bfab3ffb-8798-423d-9b55-83868b76a14e-metrics-certs") pod "network-metrics-daemon-xqz79" (UID: "bfab3ffb-8798-423d-9b55-83868b76a14e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.266508 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.281607 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbwws\" (UniqueName: \"kubernetes.io/projected/bfab3ffb-8798-423d-9b55-83868b76a14e-kube-api-access-vbwws\") pod \"network-metrics-daemon-xqz79\" (UID: \"bfab3ffb-8798-423d-9b55-83868b76a14e\") " pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.281990 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.289165 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.289205 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.289214 4721 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.289230 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.289241 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:40Z","lastTransitionTime":"2026-02-02T13:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.298896 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.311251 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.324974 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecfc8a9e-993f-494a-ba91-4132345cee05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sz7tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.355842 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878
a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.378405 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.382686 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 03:51:52.158317144 +0000 UTC Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.391768 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.391801 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.391809 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.391823 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.391832 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:40Z","lastTransitionTime":"2026-02-02T13:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.398112 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.409364 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:40 crc kubenswrapper[4721]: E0202 13:01:40.409501 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.416199 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqz79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfab3ffb-8798-423d-9b55-83868b76a14e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqz79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.430833 4721 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.443562 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.454551 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.465539 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecfc8a9e-993f-494a-ba91-4132345cee05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sz7tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.477808 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.491783 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.493756 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.493803 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.493820 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.493843 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.493861 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:40Z","lastTransitionTime":"2026-02-02T13:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.505636 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.518091 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqz79" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfab3ffb-8798-423d-9b55-83868b76a14e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqz79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.537963 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878
a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.557690 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf
1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.577056 4721 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.591706 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.595841 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.595878 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.595887 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.595903 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.595914 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:40Z","lastTransitionTime":"2026-02-02T13:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.607942 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.628121 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef571a7e891a1eb9aa9c8d0c3590e57c27aed2b369226612798dfa8e9fff15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.643763 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.659408 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.680301 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://128d117e2ed6ed05a65511943fe20f8f9d5cd3f62044941596a47d3b7a03eea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://128d117e2ed6ed05a65511943fe20f8f9d5cd3f62044941596a47d3b7a03eea8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"message\\\":\\\"ace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:01:35.608892 6180 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:01:35.609124 6180 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 13:01:35.609151 6180 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 13:01:35.609186 6180 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 13:01:35.609190 6180 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 13:01:35.609206 6180 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 13:01:35.609217 6180 factory.go:656] Stopping watch factory\\\\nI0202 13:01:35.609232 6180 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 13:01:35.609240 6180 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 13:01:35.609246 6180 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 13:01:35.609251 6180 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 13:01:35.609263 6180 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 13:01:35.609269 6180 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 13:01:35.608240 6180 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwcs2_openshift-ovn-kubernetes(b15bc48d-f88d-4b38-a9e1-00bb00b88a52)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.699135 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.699201 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.699212 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.699232 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.699246 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:40Z","lastTransitionTime":"2026-02-02T13:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.714851 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" event={"ID":"ecfc8a9e-993f-494a-ba91-4132345cee05","Type":"ContainerStarted","Data":"b0c7004e0fbbf58f24a8539f9e323fdba3cbe648aff6b6c7be443d765260a148"} Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.735376 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb
68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.749730 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.759487 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.770855 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqz79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfab3ffb-8798-423d-9b55-83868b76a14e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqz79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.772390 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/bfab3ffb-8798-423d-9b55-83868b76a14e-metrics-certs\") pod \"network-metrics-daemon-xqz79\" (UID: \"bfab3ffb-8798-423d-9b55-83868b76a14e\") " pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:01:40 crc kubenswrapper[4721]: E0202 13:01:40.772585 4721 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 13:01:40 crc kubenswrapper[4721]: E0202 13:01:40.772654 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfab3ffb-8798-423d-9b55-83868b76a14e-metrics-certs podName:bfab3ffb-8798-423d-9b55-83868b76a14e nodeName:}" failed. No retries permitted until 2026-02-02 13:01:41.772636461 +0000 UTC m=+42.075150860 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bfab3ffb-8798-423d-9b55-83868b76a14e-metrics-certs") pod "network-metrics-daemon-xqz79" (UID: "bfab3ffb-8798-423d-9b55-83868b76a14e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.791088 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-contr
oller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.801636 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.801721 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.801765 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.802466 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.802517 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:40Z","lastTransitionTime":"2026-02-02T13:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.810935 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.830597 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.847526 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.869724 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef571a7e891a1eb9aa9c8d0c3590e57c27aed2b369226612798dfa8e9fff15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-02T13:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.892513 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.906228 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.906300 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.906318 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.906345 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.906362 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:40Z","lastTransitionTime":"2026-02-02T13:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.907517 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.928055 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://128d117e2ed6ed05a65511943fe20f8f9d5cd3f62044941596a47d3b7a03eea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://128d117e2ed6ed05a65511943fe20f8f9d5cd3f62044941596a47d3b7a03eea8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"message\\\":\\\"ace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:01:35.608892 6180 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:01:35.609124 6180 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 13:01:35.609151 6180 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 13:01:35.609186 6180 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 13:01:35.609190 6180 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 13:01:35.609206 6180 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 13:01:35.609217 6180 factory.go:656] Stopping watch factory\\\\nI0202 13:01:35.609232 6180 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 13:01:35.609240 6180 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 13:01:35.609246 6180 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 13:01:35.609251 6180 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 13:01:35.609263 6180 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 13:01:35.609269 6180 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 13:01:35.608240 6180 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pwcs2_openshift-ovn-kubernetes(b15bc48d-f88d-4b38-a9e1-00bb00b88a52)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.940812 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.953534 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.965453 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.977575 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:40 crc kubenswrapper[4721]: I0202 13:01:40.990210 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecfc8a9e-993f-494a-ba91-4132345cee05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e9bc6a3dd0746c09c22e858873a85ac0fd67fe903dfa6f2cd5061020ce5230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c7004e0fbbf58f24a8539f9e323fdba3cbe648aff6b6c7be443d765260a148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sz7tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:40Z is after 2025-08-24T17:21:41Z" Feb 02 
13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.009332 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.009400 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.009414 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.009433 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.009445 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:41Z","lastTransitionTime":"2026-02-02T13:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.113211 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.113287 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.113307 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.113341 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.113361 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:41Z","lastTransitionTime":"2026-02-02T13:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.216775 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.216847 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.216865 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.216894 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.216914 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:41Z","lastTransitionTime":"2026-02-02T13:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.319909 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.319959 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.319971 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.319989 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.320002 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:41Z","lastTransitionTime":"2026-02-02T13:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.383160 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 08:00:10.033090849 +0000 UTC Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.409612 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.409680 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:01:41 crc kubenswrapper[4721]: E0202 13:01:41.409754 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.409613 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:01:41 crc kubenswrapper[4721]: E0202 13:01:41.409889 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:01:41 crc kubenswrapper[4721]: E0202 13:01:41.410119 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.423234 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.423278 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.423294 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.423317 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.423335 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:41Z","lastTransitionTime":"2026-02-02T13:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.526907 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.526968 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.526989 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.527019 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.527043 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:41Z","lastTransitionTime":"2026-02-02T13:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.630250 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.630325 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.630340 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.630362 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.630379 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:41Z","lastTransitionTime":"2026-02-02T13:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.752369 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.752496 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.752518 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.752542 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.752559 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:41Z","lastTransitionTime":"2026-02-02T13:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.785042 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfab3ffb-8798-423d-9b55-83868b76a14e-metrics-certs\") pod \"network-metrics-daemon-xqz79\" (UID: \"bfab3ffb-8798-423d-9b55-83868b76a14e\") " pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:01:41 crc kubenswrapper[4721]: E0202 13:01:41.785425 4721 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 13:01:41 crc kubenswrapper[4721]: E0202 13:01:41.785530 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfab3ffb-8798-423d-9b55-83868b76a14e-metrics-certs podName:bfab3ffb-8798-423d-9b55-83868b76a14e nodeName:}" failed. No retries permitted until 2026-02-02 13:01:43.785507468 +0000 UTC m=+44.088021857 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bfab3ffb-8798-423d-9b55-83868b76a14e-metrics-certs") pod "network-metrics-daemon-xqz79" (UID: "bfab3ffb-8798-423d-9b55-83868b76a14e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.855952 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.856041 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.856105 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.856140 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.856164 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:41Z","lastTransitionTime":"2026-02-02T13:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.959216 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.959261 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.959274 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.959290 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:41 crc kubenswrapper[4721]: I0202 13:01:41.959300 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:41Z","lastTransitionTime":"2026-02-02T13:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.061549 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.061620 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.061632 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.061678 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.061693 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:42Z","lastTransitionTime":"2026-02-02T13:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.164735 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.164786 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.164797 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.164813 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.164822 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:42Z","lastTransitionTime":"2026-02-02T13:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.268215 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.268276 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.268288 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.268308 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.268330 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:42Z","lastTransitionTime":"2026-02-02T13:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.370990 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.371041 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.371053 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.371097 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.371112 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:42Z","lastTransitionTime":"2026-02-02T13:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.383374 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 23:34:48.641660503 +0000 UTC Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.409391 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:42 crc kubenswrapper[4721]: E0202 13:01:42.409628 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.474706 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.474783 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.474807 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.474831 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.474852 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:42Z","lastTransitionTime":"2026-02-02T13:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.578633 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.578702 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.578726 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.578758 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.578782 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:42Z","lastTransitionTime":"2026-02-02T13:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.685772 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.685865 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.685891 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.685923 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.685947 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:42Z","lastTransitionTime":"2026-02-02T13:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.789526 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.789572 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.789583 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.789601 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.789612 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:42Z","lastTransitionTime":"2026-02-02T13:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.892974 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.893036 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.893052 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.893106 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.893125 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:42Z","lastTransitionTime":"2026-02-02T13:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.996106 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.996143 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.996154 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.996169 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:42 crc kubenswrapper[4721]: I0202 13:01:42.996179 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:42Z","lastTransitionTime":"2026-02-02T13:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.098939 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.098975 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.098988 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.099004 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.099017 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:43Z","lastTransitionTime":"2026-02-02T13:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.202013 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.202059 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.202098 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.202124 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.202138 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:43Z","lastTransitionTime":"2026-02-02T13:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.307571 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.307613 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.307624 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.307643 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.307658 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:43Z","lastTransitionTime":"2026-02-02T13:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.383536 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 16:33:17.06348755 +0000 UTC Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.408626 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:01:43 crc kubenswrapper[4721]: E0202 13:01:43.408765 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.408956 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.408626 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:01:43 crc kubenswrapper[4721]: E0202 13:01:43.409259 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:01:43 crc kubenswrapper[4721]: E0202 13:01:43.409436 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.410720 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.410761 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.410773 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.410787 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.410799 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:43Z","lastTransitionTime":"2026-02-02T13:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.514212 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.514259 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.514271 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.514288 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.514301 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:43Z","lastTransitionTime":"2026-02-02T13:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.617570 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.617621 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.617633 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.617651 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.617665 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:43Z","lastTransitionTime":"2026-02-02T13:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.721205 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.721287 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.721301 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.721329 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.721347 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:43Z","lastTransitionTime":"2026-02-02T13:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.806568 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfab3ffb-8798-423d-9b55-83868b76a14e-metrics-certs\") pod \"network-metrics-daemon-xqz79\" (UID: \"bfab3ffb-8798-423d-9b55-83868b76a14e\") " pod="openshift-multus/network-metrics-daemon-xqz79"
Feb 02 13:01:43 crc kubenswrapper[4721]: E0202 13:01:43.806887 4721 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 13:01:43 crc kubenswrapper[4721]: E0202 13:01:43.806993 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfab3ffb-8798-423d-9b55-83868b76a14e-metrics-certs podName:bfab3ffb-8798-423d-9b55-83868b76a14e nodeName:}" failed. No retries permitted until 2026-02-02 13:01:47.806964847 +0000 UTC m=+48.109479276 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bfab3ffb-8798-423d-9b55-83868b76a14e-metrics-certs") pod "network-metrics-daemon-xqz79" (UID: "bfab3ffb-8798-423d-9b55-83868b76a14e") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.825230 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.825655 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.825836 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.826058 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.826279 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:43Z","lastTransitionTime":"2026-02-02T13:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
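Analysis: the MountVolume.SetUp failure above is the kubelet-side view of the same startup race. "object \"openshift-multus\"/\"metrics-daemon-secret\" not registered" means the kubelet's secret manager has not yet registered that secret for this pod, so the mount is parked with a 4s backoff (no retry before 13:01:47.8, per the nestedpendingoperations entry). If the condition persisted past node readiness, the first thing to verify would be that the secret exists at the API at all (illustrative, assuming oc access and the names taken from the log):

  oc -n openshift-multus get secret metrics-daemon-secret
  oc -n openshift-multus get pod network-metrics-daemon-xqz79 -o wide   # confirms the pod is bound to this node
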
Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.929955 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.930019 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.930034 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.930057 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:01:43 crc kubenswrapper[4721]: I0202 13:01:43.930102 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:43Z","lastTransitionTime":"2026-02-02T13:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.032793 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.032853 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.032868 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.032890 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.032909 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:44Z","lastTransitionTime":"2026-02-02T13:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.135856 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.135919 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.135936 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.135960 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.135977 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:44Z","lastTransitionTime":"2026-02-02T13:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.239780 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.239865 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.239894 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.239928 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.239955 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:44Z","lastTransitionTime":"2026-02-02T13:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.342666 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.342754 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.342779 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.342815 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.342842 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:44Z","lastTransitionTime":"2026-02-02T13:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.384639 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 14:12:16.247949756 +0000 UTC
Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.409056 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 13:01:44 crc kubenswrapper[4721]: E0202 13:01:44.409312 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
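Analysis: the certificate_manager entries are worth watching here. The kubelet-serving certificate itself is valid until 2026-02-24, but the rotation deadlines being computed (2025-11-10, 2025-11-13, and 2025-12-11 below) are all in the past relative to the node clock of 2026-02-02, so the kubelet re-rolls a jittered deadline on each pass and will attempt rotation as soon as it can reach the API. To read the serving certificate's validity window directly on the node (a sketch; the path is the usual kubelet PKI location on OpenShift and may differ on other layouts):

  openssl x509 -noout -dates -in /var/lib/kubelet/pki/kubelet-server-current.pem   # assumed standard path
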
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.445195 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.445244 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.445253 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.445266 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.445278 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:44Z","lastTransitionTime":"2026-02-02T13:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.547550 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.547604 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.547621 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.547644 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.547660 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:44Z","lastTransitionTime":"2026-02-02T13:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.650192 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.650223 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.650231 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.650245 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.650256 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:44Z","lastTransitionTime":"2026-02-02T13:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.753579 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.753632 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.753650 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.753674 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.753692 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:44Z","lastTransitionTime":"2026-02-02T13:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.856799 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.856846 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.856855 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.856872 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.856883 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:44Z","lastTransitionTime":"2026-02-02T13:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.960193 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.960232 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.960242 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.960256 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:44 crc kubenswrapper[4721]: I0202 13:01:44.960265 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:44Z","lastTransitionTime":"2026-02-02T13:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.068754 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.068824 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.068836 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.068858 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.068872 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:45Z","lastTransitionTime":"2026-02-02T13:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.171668 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.171725 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.171735 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.171747 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.171757 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:45Z","lastTransitionTime":"2026-02-02T13:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.275507 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.275570 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.275606 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.275638 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.275661 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:45Z","lastTransitionTime":"2026-02-02T13:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.382809 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.382900 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.382924 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.382953 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.382976 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:45Z","lastTransitionTime":"2026-02-02T13:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.385576 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 08:16:31.892759916 +0000 UTC Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.409053 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.409165 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.409126 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:01:45 crc kubenswrapper[4721]: E0202 13:01:45.409261 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:01:45 crc kubenswrapper[4721]: E0202 13:01:45.409397 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:01:45 crc kubenswrapper[4721]: E0202 13:01:45.409661 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.486665 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.486729 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.486788 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.486821 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.486844 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:45Z","lastTransitionTime":"2026-02-02T13:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.590637 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.590706 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.590724 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.590750 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.590767 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:45Z","lastTransitionTime":"2026-02-02T13:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.695033 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.695178 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.695205 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.695236 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.695260 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:45Z","lastTransitionTime":"2026-02-02T13:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.798045 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.798149 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.798188 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.798225 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.798247 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:45Z","lastTransitionTime":"2026-02-02T13:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.901635 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.901687 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.901698 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.901723 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:45 crc kubenswrapper[4721]: I0202 13:01:45.901736 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:45Z","lastTransitionTime":"2026-02-02T13:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.004462 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.004517 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.004529 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.004550 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.004566 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:46Z","lastTransitionTime":"2026-02-02T13:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.108525 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.108590 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.108603 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.108628 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.108642 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:46Z","lastTransitionTime":"2026-02-02T13:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.124049 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.124123 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.124139 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.124164 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.124182 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:46Z","lastTransitionTime":"2026-02-02T13:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:46 crc kubenswrapper[4721]: E0202 13:01:46.148456 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:46Z is after 2025-08-24T17:21:41Z"
Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.154173 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.154218 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
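Analysis: this is the first hard error of the section and the most telling one. The node status patch is rejected not by the API server itself but by the node.network-node-identity.openshift.io admission webhook at 127.0.0.1:9743, whose TLS certificate expired on 2025-08-24 while the node clock reads 2026-02-02. Until that certificate is renewed, every status patch fails the same way, so the Ready condition can never be persisted even once the CNI config appears; the retries below fail identically. A quick check of the certificate actually being served (sketch, assuming shell access to the node):

  echo | openssl s_client -connect 127.0.0.1:9743 2>/dev/null | openssl x509 -noout -enddate   # expect a notAfter in the past
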
event="NodeHasNoDiskPressure" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.154233 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.154253 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.154266 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:46Z","lastTransitionTime":"2026-02-02T13:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:46 crc kubenswrapper[4721]: E0202 13:01:46.172953 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:46Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.177479 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.177524 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.177538 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.177565 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.177585 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:46Z","lastTransitionTime":"2026-02-02T13:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:46 crc kubenswrapper[4721]: E0202 13:01:46.197180 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:46Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.202194 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.202343 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.202419 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.202486 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.202544 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:46Z","lastTransitionTime":"2026-02-02T13:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:46 crc kubenswrapper[4721]: E0202 13:01:46.218500 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:46Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.223765 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.223811 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.223829 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.223851 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.223863 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:46Z","lastTransitionTime":"2026-02-02T13:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:46 crc kubenswrapper[4721]: E0202 13:01:46.240194 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:46Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:46 crc kubenswrapper[4721]: E0202 13:01:46.240360 4721 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.242431 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.242470 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.242482 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.242500 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.242512 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:46Z","lastTransitionTime":"2026-02-02T13:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.345779 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.345840 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.345850 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.345878 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.345891 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:46Z","lastTransitionTime":"2026-02-02T13:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.385992 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 16:19:41.552084555 +0000 UTC Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.409657 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:46 crc kubenswrapper[4721]: E0202 13:01:46.409792 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.449209 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.449264 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.449279 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.449298 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.449314 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:46Z","lastTransitionTime":"2026-02-02T13:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.552323 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.552358 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.552367 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.552379 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.552388 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:46Z","lastTransitionTime":"2026-02-02T13:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.655464 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.655525 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.655550 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.655579 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.655602 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:46Z","lastTransitionTime":"2026-02-02T13:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.759484 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.759538 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.759551 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.759572 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.759584 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:46Z","lastTransitionTime":"2026-02-02T13:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.862949 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.863061 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.863114 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.863144 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.863160 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:46Z","lastTransitionTime":"2026-02-02T13:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.966564 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.966843 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.966870 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.966902 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:46 crc kubenswrapper[4721]: I0202 13:01:46.966926 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:46Z","lastTransitionTime":"2026-02-02T13:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.069989 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.070041 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.070052 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.070084 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.070098 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:47Z","lastTransitionTime":"2026-02-02T13:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.173623 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.173696 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.173714 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.173742 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.173763 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:47Z","lastTransitionTime":"2026-02-02T13:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.277500 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.277546 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.277561 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.277583 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.277598 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:47Z","lastTransitionTime":"2026-02-02T13:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.381254 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.381316 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.381336 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.381360 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.381379 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:47Z","lastTransitionTime":"2026-02-02T13:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.387055 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 17:02:16.671589262 +0000 UTC Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.409470 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.409805 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:01:47 crc kubenswrapper[4721]: E0202 13:01:47.409890 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.409978 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:01:47 crc kubenswrapper[4721]: E0202 13:01:47.410280 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:01:47 crc kubenswrapper[4721]: E0202 13:01:47.410425 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.484580 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.484617 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.484630 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.484683 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.484699 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:47Z","lastTransitionTime":"2026-02-02T13:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.587695 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.587756 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.587768 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.587787 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.587800 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:47Z","lastTransitionTime":"2026-02-02T13:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.690738 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.691142 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.691235 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.691336 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.691402 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:47Z","lastTransitionTime":"2026-02-02T13:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.795611 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.795691 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.795705 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.795730 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.795745 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:47Z","lastTransitionTime":"2026-02-02T13:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.857818 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfab3ffb-8798-423d-9b55-83868b76a14e-metrics-certs\") pod \"network-metrics-daemon-xqz79\" (UID: \"bfab3ffb-8798-423d-9b55-83868b76a14e\") " pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:01:47 crc kubenswrapper[4721]: E0202 13:01:47.858127 4721 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 13:01:47 crc kubenswrapper[4721]: E0202 13:01:47.858264 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfab3ffb-8798-423d-9b55-83868b76a14e-metrics-certs podName:bfab3ffb-8798-423d-9b55-83868b76a14e nodeName:}" failed. No retries permitted until 2026-02-02 13:01:55.85823191 +0000 UTC m=+56.160746339 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bfab3ffb-8798-423d-9b55-83868b76a14e-metrics-certs") pod "network-metrics-daemon-xqz79" (UID: "bfab3ffb-8798-423d-9b55-83868b76a14e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.899457 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.899515 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.899537 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.899560 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:47 crc kubenswrapper[4721]: I0202 13:01:47.899572 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:47Z","lastTransitionTime":"2026-02-02T13:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.003810 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.003903 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.003916 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.003941 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.003958 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:48Z","lastTransitionTime":"2026-02-02T13:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.107514 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.107582 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.107605 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.107635 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.107657 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:48Z","lastTransitionTime":"2026-02-02T13:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.210921 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.210975 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.210988 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.211009 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.211029 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:48Z","lastTransitionTime":"2026-02-02T13:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.314412 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.314453 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.314463 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.314483 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.314497 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:48Z","lastTransitionTime":"2026-02-02T13:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.387697 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 06:06:55.448701676 +0000 UTC Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.409282 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:48 crc kubenswrapper[4721]: E0202 13:01:48.409994 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.416844 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.416896 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.416911 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.416930 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.416946 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:48Z","lastTransitionTime":"2026-02-02T13:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.519884 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.519941 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.519956 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.519977 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.519991 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:48Z","lastTransitionTime":"2026-02-02T13:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.622117 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.622187 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.622209 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.622234 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.622252 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:48Z","lastTransitionTime":"2026-02-02T13:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.725207 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.725245 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.725258 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.725279 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.725290 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:48Z","lastTransitionTime":"2026-02-02T13:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.829374 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.829445 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.829463 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.829483 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.829495 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:48Z","lastTransitionTime":"2026-02-02T13:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.932509 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.932572 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.932589 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.932690 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:48 crc kubenswrapper[4721]: I0202 13:01:48.932710 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:48Z","lastTransitionTime":"2026-02-02T13:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.036193 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.036257 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.036279 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.036308 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.036329 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:49Z","lastTransitionTime":"2026-02-02T13:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.139462 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.139516 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.139534 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.139559 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.139580 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:49Z","lastTransitionTime":"2026-02-02T13:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.242610 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.242652 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.242662 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.242679 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.242689 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:49Z","lastTransitionTime":"2026-02-02T13:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.346602 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.347112 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.347316 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.347481 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.347624 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:49Z","lastTransitionTime":"2026-02-02T13:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.388380 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 12:09:55.618039191 +0000 UTC Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.408769 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.408869 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.408915 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:01:49 crc kubenswrapper[4721]: E0202 13:01:49.409673 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:01:49 crc kubenswrapper[4721]: E0202 13:01:49.409820 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:01:49 crc kubenswrapper[4721]: E0202 13:01:49.409979 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.451306 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.451381 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.451420 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.451453 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.451473 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:49Z","lastTransitionTime":"2026-02-02T13:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.555351 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.555399 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.555408 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.555423 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.555432 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:49Z","lastTransitionTime":"2026-02-02T13:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.658660 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.659176 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.659435 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.659603 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.659757 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:49Z","lastTransitionTime":"2026-02-02T13:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.763289 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.763403 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.763470 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.763499 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.763521 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:49Z","lastTransitionTime":"2026-02-02T13:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.866559 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.866856 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.867019 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.867257 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.867392 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:49Z","lastTransitionTime":"2026-02-02T13:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.970479 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.970753 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.970993 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.971165 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:49 crc kubenswrapper[4721]: I0202 13:01:49.971385 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:49Z","lastTransitionTime":"2026-02-02T13:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.074410 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.074787 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.074918 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.075096 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.075250 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:50Z","lastTransitionTime":"2026-02-02T13:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.178363 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.178405 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.178418 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.178432 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.178441 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:50Z","lastTransitionTime":"2026-02-02T13:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.280636 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.281042 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.281324 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.281495 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.281631 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:50Z","lastTransitionTime":"2026-02-02T13:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.384835 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.384885 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.384893 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.384907 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.384917 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:50Z","lastTransitionTime":"2026-02-02T13:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.389131 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 16:38:32.19320661 +0000 UTC Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.409729 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:50 crc kubenswrapper[4721]: E0202 13:01:50.409937 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.432683 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.451986 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.471564 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.487452 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.487517 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.487541 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.487572 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.487595 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:50Z","lastTransitionTime":"2026-02-02T13:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.492679 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef571a7e891a1eb9aa9c8d0c3590e57c27aed2b369226612798dfa8e9fff15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:
01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.517308 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-cr
c-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.538863 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.575433 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://128d117e2ed6ed05a65511943fe20f8f9d5cd3f62044941596a47d3b7a03eea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://128d117e2ed6ed05a65511943fe20f8f9d5cd3f62044941596a47d3b7a03eea8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"message\\\":\\\"ace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:01:35.608892 6180 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:01:35.609124 6180 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 13:01:35.609151 6180 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 13:01:35.609186 6180 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 13:01:35.609190 6180 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 13:01:35.609206 6180 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 13:01:35.609217 6180 factory.go:656] Stopping watch factory\\\\nI0202 13:01:35.609232 6180 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 13:01:35.609240 6180 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 13:01:35.609246 6180 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 13:01:35.609251 6180 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 13:01:35.609263 6180 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 13:01:35.609269 6180 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 13:01:35.608240 6180 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwcs2_openshift-ovn-kubernetes(b15bc48d-f88d-4b38-a9e1-00bb00b88a52)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.591124 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.591173 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.591185 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.591201 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.591213 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:50Z","lastTransitionTime":"2026-02-02T13:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.596937 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.615871 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.628190 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.650793 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecfc8a9e-993f-494a-ba91-4132345cee05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e9bc6a3dd0746c09c22e858873a85ac0fd67fe903dfa6f2cd5061020ce5230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c7004e0fbbf58f24a8539f9e323fdba3cbe648aff6b6c7be443d765260a148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sz7tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:50Z is after 2025-08-24T17:21:41Z" Feb 02 
13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.666085 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.681305 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.693696 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.693734 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.693745 4721 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.693761 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.693773 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:50Z","lastTransitionTime":"2026-02-02T13:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.696925 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.709829 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqz79" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfab3ffb-8798-423d-9b55-83868b76a14e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqz79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.730003 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878
a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.746445 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:50Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.795673 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.795773 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.795794 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.795819 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.795836 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:50Z","lastTransitionTime":"2026-02-02T13:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.899054 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.899140 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.899158 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.899182 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.899202 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:50Z","lastTransitionTime":"2026-02-02T13:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
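The recurring NodeNotReady condition is gated purely on CNI configuration: the runtime reports NetworkReady=false until a network config appears in /etc/kubernetes/cni/net.d/. A rough Go sketch of that readiness test follows, assuming the conventional CNI config extensions; this is an illustration of the check, not CRI-O's actual implementation.

// cnicheck.go - illustrative sketch: the readiness gate behind "no CNI
// configuration file in /etc/kubernetes/cni/net.d/" amounts to looking
// for a config file in the conf dir. Extensions are assumed conventions.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // the directory named in the log
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read conf dir:", err)
		return
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", e.Name())
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file - node stays NotReady")
	}
}

Here ovnkube-node has not yet written its config, so the gate stays closed and the kubelet keeps republishing NodeNotReady.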
Has your network provider started?"} Feb 02 13:01:50 crc kubenswrapper[4721]: I0202 13:01:50.996202 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:01:50 crc kubenswrapper[4721]: E0202 13:01:50.996609 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:02:22.996568758 +0000 UTC m=+83.299083187 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.002528 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.003138 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.003372 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.003574 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.003897 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:51Z","lastTransitionTime":"2026-02-02T13:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
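The UnmountVolume failure above is retried under per-operation exponential backoff: "No retries permitted until 2026-02-02 13:02:22 ... (durationBeforeRetry 32s)" shows the delay has already doubled several times. A sketch of the doubling pattern follows; the initial delay and cap are assumptions, not necessarily kubelet's exact constants.

// backoff.go - illustrative sketch of the doubling retry delay visible in
// the log ("durationBeforeRetry 32s"); constants are assumptions.
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	delay time.Duration
	cap   time.Duration
}

func (b *backoff) next() time.Duration {
	if b.delay == 0 {
		b.delay = 500 * time.Millisecond // assumed initial delay
		return b.delay
	}
	b.delay *= 2 // double on every consecutive failure
	if b.delay > b.cap {
		b.delay = b.cap
	}
	return b.delay
}

func main() {
	b := &backoff{cap: 2 * time.Minute}
	for i := 1; i <= 8; i++ {
		fmt.Printf("failure %d: retry in %s\n", i, b.next())
	}
	// With these assumptions, failure 7 prints "retry in 32s",
	// matching the delay recorded in the log.
}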
Has your network provider started?"} Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.097503 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.097946 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.098418 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.098635 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:51 crc kubenswrapper[4721]: E0202 13:01:51.097657 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:01:51 crc kubenswrapper[4721]: E0202 13:01:51.098943 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:01:51 crc kubenswrapper[4721]: E0202 13:01:51.099117 4721 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:01:51 crc kubenswrapper[4721]: E0202 13:01:51.098053 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:01:51 crc kubenswrapper[4721]: E0202 13:01:51.098534 4721 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 13:01:51 crc kubenswrapper[4721]: E0202 13:01:51.099487 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:01:51 crc kubenswrapper[4721]: E0202 13:01:51.099556 4721 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:01:51 crc kubenswrapper[4721]: E0202 13:01:51.098738 4721 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 13:01:51 crc kubenswrapper[4721]: E0202 13:01:51.099330 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 13:02:23.099312724 +0000 UTC m=+83.401827133 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:01:51 crc kubenswrapper[4721]: E0202 13:01:51.099790 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 13:02:23.099745435 +0000 UTC m=+83.402259864 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:01:51 crc kubenswrapper[4721]: E0202 13:01:51.099832 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 13:02:23.099814907 +0000 UTC m=+83.402329326 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 13:01:51 crc kubenswrapper[4721]: E0202 13:01:51.100322 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 13:02:23.100262469 +0000 UTC m=+83.402776898 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.108451 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.108509 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.108529 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.108558 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.108578 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:51Z","lastTransitionTime":"2026-02-02T13:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.211843 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.211898 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.211915 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.211938 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.211954 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:51Z","lastTransitionTime":"2026-02-02T13:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
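The MountVolume.SetUp failures above are secondary damage: the projected service-account volumes depend on the kube-root-ca.crt and openshift-service-ca.crt configmaps, and "object ... not registered" indicates the kubelet has not yet (re)synced those objects into its local cache, which is expected while the node is NotReady. Once the API server is reachable, their existence can be confirmed with client-go along these lines; the kubeconfig path is an assumption.

// checkcm.go - hedged sketch: confirm the configmaps named in the
// "not registered" errors actually exist server-side.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	for _, name := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
		cm, err := client.CoreV1().ConfigMaps("openshift-network-diagnostics").
			Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("%s: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: present, %d keys\n", cm.Name, len(cm.Data))
	}
}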
Has your network provider started?"} Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.314652 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.314687 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.314695 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.314712 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.314722 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:51Z","lastTransitionTime":"2026-02-02T13:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.389825 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 16:13:13.507359756 +0000 UTC Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.408843 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.409237 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.409268 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:01:51 crc kubenswrapper[4721]: E0202 13:01:51.409369 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:01:51 crc kubenswrapper[4721]: E0202 13:01:51.409568 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.409820 4721 scope.go:117] "RemoveContainer" containerID="128d117e2ed6ed05a65511943fe20f8f9d5cd3f62044941596a47d3b7a03eea8" Feb 02 13:01:51 crc kubenswrapper[4721]: E0202 13:01:51.409829 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
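One forward-looking line in this stretch: certificate_manager.go reports the kubelet-serving certificate valid until 2026-02-24 but a rotation deadline of 2025-11-23, which is already in the past, so rotation will be attempted immediately. The deadline is chosen at a jittered point late in the certificate's lifetime; the sketch below models that scheme, where the 70-90% window and the assumed issue date are assumptions, not client-go's exact behavior.

// deadline.go - sketch of a jittered rotation deadline, in the spirit of
// client-go's certificate manager; the 70-90% window is an assumption.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	// Rotate somewhere in the last 10-30% of the lifetime.
	frac := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(frac * float64(lifetime)))
}

func main() {
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC) // assumed issue time
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // expiry from the log
	d := rotationDeadline(notBefore, notAfter)
	fmt.Printf("rotation deadline: %s (past due: %v)\n", d, time.Now().After(d))
}

With a one-year lifetime assumed, the computed deadline lands in the same late-2025 range as the logged 2025-11-23 deadline.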
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.417707 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.417764 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.417774 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.417794 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.417807 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:51Z","lastTransitionTime":"2026-02-02T13:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.520414 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.520889 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.520939 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.520958 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.520971 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:51Z","lastTransitionTime":"2026-02-02T13:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.623120 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.623165 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.623178 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.623193 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.623202 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:51Z","lastTransitionTime":"2026-02-02T13:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.725945 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.726007 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.726026 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.726050 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.726098 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:51Z","lastTransitionTime":"2026-02-02T13:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
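The node-status records repeat every ~100ms and all share the standard klog header (severity letter, MMDD date, time, PID, file:line). When triaging a stream like this, parsing that header out can help; a small sketch assuming the stock klog format:

// klogparse.go - sketch: split a klog line like
//   I0202 13:01:51.623202 4721 setters.go:603] "Node became not ready" ...
// into severity, date, time, pid, source, and message.
package main

import (
	"fmt"
	"log"
	"regexp"
)

var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^ ]+)\] (.*)$`)

func main() {
	line := `I0202 13:01:51.623202 4721 setters.go:603] "Node became not ready" node="crc"`
	m := klogRe.FindStringSubmatch(line)
	if m == nil {
		log.Fatal("not a klog line")
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s\nmsg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}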
Has your network provider started?"} Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.790500 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwcs2_b15bc48d-f88d-4b38-a9e1-00bb00b88a52/ovnkube-controller/1.log" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.793345 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" event={"ID":"b15bc48d-f88d-4b38-a9e1-00bb00b88a52","Type":"ContainerStarted","Data":"6a1a04dffbd006aaa015d90a67b9f605aea78ad5e0c0dd17cff95375c2bc73f1"} Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.793505 4721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.809632 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406
f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:51Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.827680 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
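The terminated check-endpoints container log captured in the patch above warns about several insecure cipher suites (TLS_RSA_WITH_AES_128_CBC_SHA and friends) before exiting with "pods kube-apiserver-crc not found". Warnings of that kind come from serving with a permissive cipher list; the sketch below shows the stricter alternative, an explicit allowlist that omits the flagged CBC and static-RSA suites. It is an illustration of the hardening pattern, not the component's actual configuration.

// tlsconfig.go - sketch: build a tls.Config that omits the cipher suites
// flagged as insecure in the check-endpoints log above.
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	cfg := &tls.Config{
		MinVersion: tls.VersionTLS12,
		CipherSuites: []uint16{
			// Modern AEAD suites only; the CBC and static-RSA suites the
			// log warns about are deliberately absent.
			tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
			tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
			tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
			tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,
		},
	}
	for _, id := range cfg.CipherSuites {
		fmt.Println(tls.CipherSuiteName(id))
	}
}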
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:51Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.829468 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.829515 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.829527 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.829548 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.829564 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:51Z","lastTransitionTime":"2026-02-02T13:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.848334 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:51Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.866085 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:51Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.888583 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef571a7e891a1eb9aa9c8d0c3590e57c27aed2b369226612798dfa8e9fff15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-02T13:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T13:01:51Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.913030 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:51Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.927219 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:51Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.932442 4721 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.932487 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.932497 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.932518 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.932534 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:51Z","lastTransitionTime":"2026-02-02T13:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.954935 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1a04dffbd006aaa015d90a67b9f605aea78ad5
e0c0dd17cff95375c2bc73f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://128d117e2ed6ed05a65511943fe20f8f9d5cd3f62044941596a47d3b7a03eea8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"message\\\":\\\"ace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:01:35.608892 6180 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:01:35.609124 6180 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 13:01:35.609151 6180 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 13:01:35.609186 6180 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 13:01:35.609190 6180 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 13:01:35.609206 6180 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 13:01:35.609217 6180 factory.go:656] Stopping watch factory\\\\nI0202 13:01:35.609232 6180 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 13:01:35.609240 6180 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 13:01:35.609246 6180 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 13:01:35.609251 6180 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 13:01:35.609263 6180 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 13:01:35.609269 6180 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 13:01:35.608240 6180 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:51Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.970593 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:51Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:51 crc kubenswrapper[4721]: I0202 13:01:51.986745 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:51Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.000813 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:51Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.011035 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.024122 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecfc8a9e-993f-494a-ba91-4132345cee05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e9bc6a3dd0746c09c22e858873a85ac0fd67fe903dfa6f2cd5061020ce5230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c7004e0fbbf58f24a8539f9e323fdba3cbe648aff6b6c7be443d765260a148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sz7tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:52Z is after 2025-08-24T17:21:41Z" Feb 02 
13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.035494 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.035545 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.035557 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.035578 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.035593 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:52Z","lastTransitionTime":"2026-02-02T13:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.044733 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.056944 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.068443 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.078947 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqz79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfab3ffb-8798-423d-9b55-83868b76a14e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqz79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.138013 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.138056 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.138080 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.138103 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.138146 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:52Z","lastTransitionTime":"2026-02-02T13:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.241417 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.241479 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.241503 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.241526 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.241543 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:52Z","lastTransitionTime":"2026-02-02T13:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.343712 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.343747 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.343755 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.343771 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.343782 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:52Z","lastTransitionTime":"2026-02-02T13:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.390258 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 12:16:26.868556933 +0000 UTC Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.408775 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:52 crc kubenswrapper[4721]: E0202 13:01:52.408900 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.447243 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.447314 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.447330 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.447353 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.447372 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:52Z","lastTransitionTime":"2026-02-02T13:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.482674 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.550693 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.550727 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.550735 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.550748 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.550758 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:52Z","lastTransitionTime":"2026-02-02T13:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.653737 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.653797 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.653807 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.653827 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.653841 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:52Z","lastTransitionTime":"2026-02-02T13:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.756394 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.756468 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.756480 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.756503 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.756527 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:52Z","lastTransitionTime":"2026-02-02T13:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.799360 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwcs2_b15bc48d-f88d-4b38-a9e1-00bb00b88a52/ovnkube-controller/2.log" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.800364 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwcs2_b15bc48d-f88d-4b38-a9e1-00bb00b88a52/ovnkube-controller/1.log" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.803982 4721 generic.go:334] "Generic (PLEG): container finished" podID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerID="6a1a04dffbd006aaa015d90a67b9f605aea78ad5e0c0dd17cff95375c2bc73f1" exitCode=1 Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.804039 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" event={"ID":"b15bc48d-f88d-4b38-a9e1-00bb00b88a52","Type":"ContainerDied","Data":"6a1a04dffbd006aaa015d90a67b9f605aea78ad5e0c0dd17cff95375c2bc73f1"} Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.804114 4721 scope.go:117] "RemoveContainer" containerID="128d117e2ed6ed05a65511943fe20f8f9d5cd3f62044941596a47d3b7a03eea8" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.805524 4721 scope.go:117] "RemoveContainer" containerID="6a1a04dffbd006aaa015d90a67b9f605aea78ad5e0c0dd17cff95375c2bc73f1" Feb 02 13:01:52 crc kubenswrapper[4721]: E0202 13:01:52.805851 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pwcs2_openshift-ovn-kubernetes(b15bc48d-f88d-4b38-a9e1-00bb00b88a52)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.827331 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.845243 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e
6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.858608 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.858639 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.858651 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.858668 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.858684 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:52Z","lastTransitionTime":"2026-02-02T13:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.876466 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1a04dffbd006aaa015d90a67b9f605aea78ad5e0c0dd17cff95375c2bc73f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://128d117e2ed6ed05a65511943fe20f8f9d5cd3f62044941596a47d3b7a03eea8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"message\\\":\\\"ace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:01:35.608892 6180 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:01:35.609124 6180 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 13:01:35.609151 6180 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 13:01:35.609186 6180 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 13:01:35.609190 6180 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 13:01:35.609206 6180 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 13:01:35.609217 6180 factory.go:656] Stopping watch factory\\\\nI0202 13:01:35.609232 6180 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 13:01:35.609240 6180 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 13:01:35.609246 6180 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 13:01:35.609251 6180 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 13:01:35.609263 6180 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 13:01:35.609269 6180 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 13:01:35.608240 6180 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a1a04dffbd006aaa015d90a67b9f605aea78ad5e0c0dd17cff95375c2bc73f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:01:52Z\\\",\\\"message\\\":\\\"map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 13:01:52.319818 6399 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-48kgl\\\\nI0202 13:01:52.319835 6399 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 13:01:52.319869 6399 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error 
occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef
0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.896856 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.916227 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.939973 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.960805 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.962103 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.962238 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.962325 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.962404 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.962499 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:52Z","lastTransitionTime":"2026-02-02T13:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:52 crc kubenswrapper[4721]: I0202 13:01:52.978280 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecfc8a9e-993f-494a-ba91-4132345cee05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e9bc6a3dd0746c09c22e858873a85ac0fd67fe903dfa6f2cd5061020ce5230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c7004e0fbbf58f24a8539f9e323fdba3cbe648aff6b6c7be443d765260a148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:39Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sz7tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:52Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.011855 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b
7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.031129 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.048756 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.064967 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqz79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfab3ffb-8798-423d-9b55-83868b76a14e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqz79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.065774 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.065823 
4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.065835 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.065856 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.065871 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:53Z","lastTransitionTime":"2026-02-02T13:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.088218 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:0
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 
13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.105489 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.122321 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.136427 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cn
i/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.151289 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef571a7e891a1eb9aa9c8d0c3590e57c27aed2b369226612798dfa8e9fff15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:0
1:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6
173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.168198 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.168265 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.168288 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.168320 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.168342 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:53Z","lastTransitionTime":"2026-02-02T13:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.272212 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.272273 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.272290 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.272314 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.272331 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:53Z","lastTransitionTime":"2026-02-02T13:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.375645 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.375713 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.375731 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.375758 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.375782 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:53Z","lastTransitionTime":"2026-02-02T13:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.390497 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 23:20:25.116386108 +0000 UTC Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.409041 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.409171 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.409057 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:01:53 crc kubenswrapper[4721]: E0202 13:01:53.409314 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:01:53 crc kubenswrapper[4721]: E0202 13:01:53.409482 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:01:53 crc kubenswrapper[4721]: E0202 13:01:53.409761 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.478202 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.479152 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.479213 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.479224 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.479243 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.479254 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:53Z","lastTransitionTime":"2026-02-02T13:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.499722 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.510558 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef571a7e891a1eb9aa9c8d0c3590e57c27aed2b369226612798dfa8e9fff15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-
h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:30Z\\\",\\\"reason
\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.527652 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.548752 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.568032 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.581422 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.581461 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.581471 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.581490 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.581501 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:53Z","lastTransitionTime":"2026-02-02T13:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.584903 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.600582 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.614113 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.647127 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1a04dffbd006aaa015d90a67b9f605aea78ad5e0c0dd17cff95375c2bc73f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://128d117e2ed6ed05a65511943fe20f8f9d5cd3f62044941596a47d3b7a03eea8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:01:35Z\\\",\\\"message\\\":\\\"ace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:01:35.608892 6180 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 13:01:35.609124 6180 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 13:01:35.609151 6180 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 13:01:35.609186 6180 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 13:01:35.609190 6180 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 13:01:35.609206 6180 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 13:01:35.609217 6180 factory.go:656] Stopping watch factory\\\\nI0202 13:01:35.609232 6180 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 13:01:35.609240 6180 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 13:01:35.609246 6180 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 13:01:35.609251 6180 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 13:01:35.609263 6180 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 13:01:35.609269 6180 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 13:01:35.608240 6180 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a1a04dffbd006aaa015d90a67b9f605aea78ad5e0c0dd17cff95375c2bc73f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:01:52Z\\\",\\\"message\\\":\\\"map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 13:01:52.319818 6399 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-48kgl\\\\nI0202 13:01:52.319835 6399 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 13:01:52.319869 6399 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.667402 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.685169 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.685235 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.685253 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.685279 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.685303 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:53Z","lastTransitionTime":"2026-02-02T13:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.690093 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.708481 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.724238 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.741911 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecfc8a9e-993f-494a-ba91-4132345cee05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e9bc6a3dd0746c09c22e858873a85ac0fd67fe903dfa6f2cd5061020ce5230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c7004e0fbbf58f24a8539f9e323fdba3cbe648aff6b6c7be443d765260a148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sz7tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 
13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.776643 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.787535 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.787593 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.787610 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.787635 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.787654 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:53Z","lastTransitionTime":"2026-02-02T13:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.795442 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.811340 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwcs2_b15bc48d-f88d-4b38-a9e1-00bb00b88a52/ovnkube-controller/2.log" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.817717 4721 scope.go:117] "RemoveContainer" containerID="6a1a04dffbd006aaa015d90a67b9f605aea78ad5e0c0dd17cff95375c2bc73f1" Feb 02 13:01:53 crc kubenswrapper[4721]: E0202 13:01:53.818064 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pwcs2_openshift-ovn-kubernetes(b15bc48d-f88d-4b38-a9e1-00bb00b88a52)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.820598 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.845490 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqz79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfab3ffb-8798-423d-9b55-83868b76a14e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqz79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.882459 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.889997 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.890056 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.890095 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.890122 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.890141 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:53Z","lastTransitionTime":"2026-02-02T13:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.902987 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.916022 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.927901 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.942281 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecfc8a9e-993f-494a-ba91-4132345cee05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e9bc6a3dd0746c09c22e858873a85ac0fd67fe903dfa6f2cd5061020ce5230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c7004e0fbbf58f24a8539f9e323fdba3cbe648aff6b6c7be443d765260a148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sz7tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 
13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.963661 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.980155 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.993054 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T13:01:53Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.993537 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.993608 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.993621 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.993645 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:53 crc kubenswrapper[4721]: I0202 13:01:53.993660 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:53Z","lastTransitionTime":"2026-02-02T13:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.007058 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqz79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfab3ffb-8798-423d-9b55-83868b76a14e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqz79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:54Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.024120 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:54Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.043736 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:54Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.062392 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:54Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.077124 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cn
i/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:54Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.096903 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.096968 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.096987 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.097014 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.097034 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:54Z","lastTransitionTime":"2026-02-02T13:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.097629 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef571a7e891a1eb9aa9c8d0c3590e57c27aed2b369226612798dfa8e9fff15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:54Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.115929 4721 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09e7b999-d47e-409d-bafe-ffde8c03995e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b10d453f884ce1095dd9bb0f8d91ebd466c4e2771a3a425abbe3001084cb09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1bc82fcd0af4a3c1e3508fe525c95017c03f521388a28e7eaad2bb5dbb0d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8797e4a71975106b9aa6c6519ef0d64047df3b21e3164f586b1441f5897e0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://8e8c9b4927e7df11ad15f3436346ceef587ef7e5e8716d26bd2e9b2000f9bc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c9b4927e7df11ad15f3436346ceef587ef7e5e8716d26bd2e9b2000f9bc16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:54Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.127994 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:54Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.139583 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:54Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.156789 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1a04dffbd006aaa015d90a67b9f605aea78ad5e0c0dd17cff95375c2bc73f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a1a04dffbd006aaa015d90a67b9f605aea78ad5e0c0dd17cff95375c2bc73f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:01:52Z\\\",\\\"message\\\":\\\"map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 13:01:52.319818 6399 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-48kgl\\\\nI0202 13:01:52.319835 6399 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 13:01:52.319869 6399 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwcs2_openshift-ovn-kubernetes(b15bc48d-f88d-4b38-a9e1-00bb00b88a52)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:54Z is after 2025-08-24T17:21:41Z"
Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.200510 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.200564 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.200580 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.200603 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.200620 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:54Z","lastTransitionTime":"2026-02-02T13:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
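The patch above is rejected because the network-node-identity webhook's serving certificate expired on 2025-08-24T17:21:41Z, long before the node clock's 2026-02-02. A quick way to confirm what the kubelet sees is to pull the certificate off the endpoint and compare its validity window to the current time. A minimal Go sketch of that check (a standalone diagnostic, not part of the kubelet; the address is taken from the log above):

    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        // InsecureSkipVerify so we can inspect a certificate that would
        // otherwise fail verification, exactly as the kubelet reports.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Println("dial:", err)
            return
        }
        defer conn.Close()
        now := time.Now()
        for _, cert := range conn.ConnectionState().PeerCertificates {
            fmt.Printf("subject=%q notBefore=%s notAfter=%s expired=%v\n",
                cert.Subject.CommonName,
                cert.NotBefore.Format(time.RFC3339),
                cert.NotAfter.Format(time.RFC3339),
                now.After(cert.NotAfter))
        }
    }

On this node the last field would print expired=true, matching the x509 error in the entry above.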
Has your network provider started?"} Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.303418 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.303470 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.303487 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.303509 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.303523 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:54Z","lastTransitionTime":"2026-02-02T13:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.390840 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 17:40:37.637632854 +0000 UTC Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.406734 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.406770 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.406784 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.406802 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.406818 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:54Z","lastTransitionTime":"2026-02-02T13:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.409293 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:54 crc kubenswrapper[4721]: E0202 13:01:54.409453 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.510059 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.510154 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.510194 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.510226 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.510249 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:54Z","lastTransitionTime":"2026-02-02T13:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.612973 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.613033 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.613056 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.613114 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.613140 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:54Z","lastTransitionTime":"2026-02-02T13:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.716557 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.716830 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.716853 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.716885 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.716912 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:54Z","lastTransitionTime":"2026-02-02T13:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.819469 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.819531 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.819542 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.819565 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.819577 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:54Z","lastTransitionTime":"2026-02-02T13:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.922299 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.922639 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.922711 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.922844 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:54 crc kubenswrapper[4721]: I0202 13:01:54.922931 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:54Z","lastTransitionTime":"2026-02-02T13:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:55 crc kubenswrapper[4721]: I0202 13:01:55.025994 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:55 crc kubenswrapper[4721]: I0202 13:01:55.026034 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:55 crc kubenswrapper[4721]: I0202 13:01:55.026046 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:55 crc kubenswrapper[4721]: I0202 13:01:55.026154 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:55 crc kubenswrapper[4721]: I0202 13:01:55.026171 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:55Z","lastTransitionTime":"2026-02-02T13:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:55 crc kubenswrapper[4721]: I0202 13:01:55.130425 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:55 crc kubenswrapper[4721]: I0202 13:01:55.130487 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:55 crc kubenswrapper[4721]: I0202 13:01:55.130499 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:55 crc kubenswrapper[4721]: I0202 13:01:55.130522 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:55 crc kubenswrapper[4721]: I0202 13:01:55.130537 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:55Z","lastTransitionTime":"2026-02-02T13:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:55 crc kubenswrapper[4721]: I0202 13:01:55.234461 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:55 crc kubenswrapper[4721]: I0202 13:01:55.234529 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:55 crc kubenswrapper[4721]: I0202 13:01:55.234544 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:55 crc kubenswrapper[4721]: I0202 13:01:55.234571 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:55 crc kubenswrapper[4721]: I0202 13:01:55.234589 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:55Z","lastTransitionTime":"2026-02-02T13:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[identical repetition elided: the event group and condition recur at 13:01:55.338]
Feb 02 13:01:55 crc kubenswrapper[4721]: I0202 13:01:55.391412 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 12:39:40.4444822 +0000 UTC
Feb 02 13:01:55 crc kubenswrapper[4721]: I0202 13:01:55.409774 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79"
Feb 02 13:01:55 crc kubenswrapper[4721]: I0202 13:01:55.409836 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 13:01:55 crc kubenswrapper[4721]: I0202 13:01:55.409910 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 13:01:55 crc kubenswrapper[4721]: E0202 13:01:55.410055 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e"
Feb 02 13:01:55 crc kubenswrapper[4721]: E0202 13:01:55.410224 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 13:01:55 crc kubenswrapper[4721]: E0202 13:01:55.410336 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[identical repetition elided: the event group and condition recur at 13:01:55.441 and 13:01:55.545]
[identical repetition elided: the event group and condition recur at 13:01:55.648, 13:01:55.751, and 13:01:55.854]
Feb 02 13:01:55 crc kubenswrapper[4721]: I0202 13:01:55.859510 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfab3ffb-8798-423d-9b55-83868b76a14e-metrics-certs\") pod \"network-metrics-daemon-xqz79\" (UID: \"bfab3ffb-8798-423d-9b55-83868b76a14e\") " pod="openshift-multus/network-metrics-daemon-xqz79"
Feb 02 13:01:55 crc kubenswrapper[4721]: E0202 13:01:55.859758 4721 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 13:01:55 crc kubenswrapper[4721]: E0202 13:01:55.859848 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfab3ffb-8798-423d-9b55-83868b76a14e-metrics-certs podName:bfab3ffb-8798-423d-9b55-83868b76a14e nodeName:}" failed. No retries permitted until 2026-02-02 13:02:11.859823706 +0000 UTC m=+72.162338125 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bfab3ffb-8798-423d-9b55-83868b76a14e-metrics-certs") pod "network-metrics-daemon-xqz79" (UID: "bfab3ffb-8798-423d-9b55-83868b76a14e") : object "openshift-multus"/"metrics-daemon-secret" not registered
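The nestedpendingoperations entry above shows the volume manager's exponential backoff: this failure earns a 16s wait (durationBeforeRetry 16s), with the retry scheduled at m=+72. A doubling backoff starting at 500ms reaches 16s on the sixth attempt, which is consistent with what the log shows; the constants in this sketch are illustrative, not lifted from the kubelet source:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        wait := 500 * time.Millisecond          // illustrative initial delay
        maxWait := 2*time.Minute + 2*time.Second // illustrative cap
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("attempt %d: wait %v before retrying\n", attempt, wait)
            wait *= 2 // double after every failure
            if wait > maxWait {
                wait = maxWait
            }
        }
    }

Attempt 6 prints a 16s wait, matching the durationBeforeRetry in the entry above.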
[identical repetition elided: the event group and condition recur at 13:01:56.356 and 13:01:56.358]
Feb 02 13:01:56 crc kubenswrapper[4721]: E0202 13:01:56.373590 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.378321 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.378365 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.378375 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.378392 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.378405 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:56Z","lastTransitionTime":"2026-02-02T13:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:56 crc kubenswrapper[4721]: E0202 13:01:56.391469 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.391734 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 05:31:08.632419738 +0000 UTC Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.395881 4721 
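Note how the kubelet-serving rotation deadline differs on every certificate_manager line (2025-11-28, then 2025-11-14, then 2026-01-12) even though the expiration is fixed at 2026-02-24: the client-go certificate manager re-draws a randomized deadline inside the certificate's validity window each time it evaluates rotation. The 0.7 + 0.2*rand factor below is an assumption for illustration; only the re-randomizing behavior is the point:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline picks a random point late in the validity window,
    // in the spirit of client-go's certificate manager (factor assumed).
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiry from the log
        notBefore := notAfter.AddDate(-1, 0, 0)                   // assumed one-year validity
        for i := 0; i < 3; i++ {
            fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter).Format(time.RFC3339))
        }
    }

Each run, like each log line above, yields a different deadline; once the drawn deadline is in the past, rotation is attempted immediately.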
[identical repetition elided: the 13:01:56.395 event group and "Node became not ready" condition]
Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.409488 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 13:01:56 crc kubenswrapper[4721]: E0202 13:01:56.409596 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 13:01:56 crc kubenswrapper[4721]: E0202 13:01:56.411548 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload elided: identical to the 13:01:56.373590 attempt above; the entry is truncated at this point in the source]
154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.416940 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.416998 4721 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.417013 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.417032 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.417044 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:56Z","lastTransitionTime":"2026-02-02T13:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:56 crc kubenswrapper[4721]: E0202 13:01:56.432183 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.436792 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.436835 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.436847 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.436863 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.436874 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:56Z","lastTransitionTime":"2026-02-02T13:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:56 crc kubenswrapper[4721]: E0202 13:01:56.451164 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:01:56Z is after 2025-08-24T17:21:41Z" Feb 02 13:01:56 crc kubenswrapper[4721]: E0202 13:01:56.451280 4721 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.458759 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.458800 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.458812 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.458828 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.458841 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:56Z","lastTransitionTime":"2026-02-02T13:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.561177 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.561237 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.561285 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.561319 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.561342 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:56Z","lastTransitionTime":"2026-02-02T13:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.664005 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.664043 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.664056 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.664092 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.664104 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:56Z","lastTransitionTime":"2026-02-02T13:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.767764 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.767815 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.767831 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.767851 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.767865 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:56Z","lastTransitionTime":"2026-02-02T13:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.870228 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.870287 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.870310 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.870340 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.870364 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:56Z","lastTransitionTime":"2026-02-02T13:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.973773 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.973851 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.973876 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.973908 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:56 crc kubenswrapper[4721]: I0202 13:01:56.973931 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:56Z","lastTransitionTime":"2026-02-02T13:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.076802 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.076833 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.076842 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.076856 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.076866 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:57Z","lastTransitionTime":"2026-02-02T13:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.179619 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.179676 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.179693 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.179715 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.179732 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:57Z","lastTransitionTime":"2026-02-02T13:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.283340 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.283391 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.283408 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.283432 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.283449 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:57Z","lastTransitionTime":"2026-02-02T13:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.386709 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.386994 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.387056 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.387141 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.387197 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:57Z","lastTransitionTime":"2026-02-02T13:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.391915 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 13:39:01.064491913 +0000 UTC Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.409462 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:01:57 crc kubenswrapper[4721]: E0202 13:01:57.409613 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.409468 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.409803 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:01:57 crc kubenswrapper[4721]: E0202 13:01:57.409988 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:01:57 crc kubenswrapper[4721]: E0202 13:01:57.409821 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.489684 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.489720 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.489739 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.489754 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.489766 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:57Z","lastTransitionTime":"2026-02-02T13:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.593154 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.593226 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.593252 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.593282 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.593306 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:57Z","lastTransitionTime":"2026-02-02T13:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.696551 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.696605 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.696621 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.696642 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.696657 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:57Z","lastTransitionTime":"2026-02-02T13:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.800441 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.800516 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.800535 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.800558 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.800570 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:57Z","lastTransitionTime":"2026-02-02T13:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.904108 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.904172 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.904189 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.904215 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:57 crc kubenswrapper[4721]: I0202 13:01:57.904228 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:57Z","lastTransitionTime":"2026-02-02T13:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.007763 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.007802 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.007812 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.007829 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.007841 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:58Z","lastTransitionTime":"2026-02-02T13:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.110682 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.110730 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.110741 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.110761 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.110772 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:58Z","lastTransitionTime":"2026-02-02T13:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.213562 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.213764 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.213793 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.213830 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.213857 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:58Z","lastTransitionTime":"2026-02-02T13:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.315966 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.316009 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.316020 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.316036 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.316047 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:58Z","lastTransitionTime":"2026-02-02T13:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.392303 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 14:25:46.255348908 +0000 UTC Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.409560 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:01:58 crc kubenswrapper[4721]: E0202 13:01:58.409786 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.419025 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.419085 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.419095 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.419107 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.419117 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:58Z","lastTransitionTime":"2026-02-02T13:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.522004 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.522051 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.522096 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.522118 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.522133 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:58Z","lastTransitionTime":"2026-02-02T13:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.625595 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.625656 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.625667 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.625684 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.625697 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:58Z","lastTransitionTime":"2026-02-02T13:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.729434 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.729476 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.729487 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.729509 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.729524 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:58Z","lastTransitionTime":"2026-02-02T13:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.833283 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.833367 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.833386 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.833413 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.833430 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:58Z","lastTransitionTime":"2026-02-02T13:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.936936 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.937013 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.937040 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.937104 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:58 crc kubenswrapper[4721]: I0202 13:01:58.937131 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:58Z","lastTransitionTime":"2026-02-02T13:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.040760 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.040809 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.040821 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.040839 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.040851 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:59Z","lastTransitionTime":"2026-02-02T13:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.143528 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.143591 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.143609 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.143630 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.143644 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:59Z","lastTransitionTime":"2026-02-02T13:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.247274 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.247349 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.247368 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.247393 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.247410 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:59Z","lastTransitionTime":"2026-02-02T13:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.350354 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.350400 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.350411 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.350425 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.350437 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:59Z","lastTransitionTime":"2026-02-02T13:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.392794 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 08:35:01.208020168 +0000 UTC Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.409456 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.409487 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.409485 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:01:59 crc kubenswrapper[4721]: E0202 13:01:59.409564 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:01:59 crc kubenswrapper[4721]: E0202 13:01:59.409650 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:01:59 crc kubenswrapper[4721]: E0202 13:01:59.409742 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.453846 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.453904 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.453919 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.453941 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.453960 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:59Z","lastTransitionTime":"2026-02-02T13:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.558158 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.558256 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.558286 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.558313 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.558337 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:59Z","lastTransitionTime":"2026-02-02T13:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.661583 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.662003 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.662296 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.662441 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.662598 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:59Z","lastTransitionTime":"2026-02-02T13:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.767521 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.767889 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.768033 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.768178 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.768284 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:59Z","lastTransitionTime":"2026-02-02T13:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.871660 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.871735 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.871753 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.871777 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.871794 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:59Z","lastTransitionTime":"2026-02-02T13:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.975477 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.975526 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.975539 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.975560 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:01:59 crc kubenswrapper[4721]: I0202 13:01:59.975572 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:01:59Z","lastTransitionTime":"2026-02-02T13:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.078097 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.078148 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.078159 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.078174 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.078184 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:00Z","lastTransitionTime":"2026-02-02T13:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.181360 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.181417 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.181430 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.181452 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.181468 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:00Z","lastTransitionTime":"2026-02-02T13:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.284628 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.284682 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.284691 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.284710 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.284725 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:00Z","lastTransitionTime":"2026-02-02T13:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.387440 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.387515 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.387527 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.387546 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.387559 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:00Z","lastTransitionTime":"2026-02-02T13:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.393854 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 03:46:49.843847152 +0000 UTC Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.409528 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:00 crc kubenswrapper[4721]: E0202 13:02:00.409766 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.427627 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:00Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.442162 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:00Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.461455 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef571a7e891a1eb9aa9c8d0c3590e57c27aed2b369226612798dfa8e9fff15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:00Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.477232 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d746
2\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:00Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.491677 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.491729 4721 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.491746 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.491767 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.491782 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:00Z","lastTransitionTime":"2026-02-02T13:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.493462 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:00Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.513235 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1a04dffbd006aaa015d90a67b9f605aea78ad5e0c0dd17cff95375c2bc73f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a1a04dffbd006aaa015d90a67b9f605aea78ad5e0c0dd17cff95375c2bc73f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:01:52Z\\\",\\\"message\\\":\\\"map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 13:01:52.319818 6399 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-48kgl\\\\nI0202 13:01:52.319835 6399 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 13:01:52.319869 6399 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwcs2_openshift-ovn-kubernetes(b15bc48d-f88d-4b38-a9e1-00bb00b88a52)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:00Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.524894 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09e7b999-d47e-409d-bafe-ffde8c03995e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b10d453f884ce1095dd9bb0f8d91ebd466c4e2771a3a425abbe3001084cb09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1bc82fcd0af4a3c1e3508fe525c95017c03f521388a28e7eaad2bb5dbb0d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8797e4a71975106b9aa6c6519ef0d64047df3b21e3164f586b1441f5897e0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e8c9b4927e7df11ad15f3436346ceef587ef7e5e8716d26bd2e9b2000f9bc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c9b4927e7df11ad15f3436346ceef587ef7e5e8716d26bd2e9b2000f9bc16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:00Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.541853 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:00Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.553565 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e
6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:00Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.562822 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:00Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.574203 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecfc8a9e-993f-494a-ba91-4132345cee05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e9bc6a3dd0746c09c22e858873a85ac0fd67fe903dfa6f2cd5061020ce5230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c7004e0fbbf58f24a8539f9e323fdba3cbe648aff6b6c7be443d765260a148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"
env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sz7tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:00Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.590566 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:00Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.595455 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.595498 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.595510 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.595531 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.595545 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:00Z","lastTransitionTime":"2026-02-02T13:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.609582 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:00Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.626950 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:00Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.644189 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqz79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfab3ffb-8798-423d-9b55-83868b76a14e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqz79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:00Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.676008 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878
a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:00Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.694104 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:00Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.698096 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.698170 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.698190 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.698215 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.698232 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:00Z","lastTransitionTime":"2026-02-02T13:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.709920 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:00Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.800973 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.801055 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.801122 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.801169 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.801201 4721 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:00Z","lastTransitionTime":"2026-02-02T13:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.904511 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.904574 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.904585 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.904606 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:00 crc kubenswrapper[4721]: I0202 13:02:00.904619 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:00Z","lastTransitionTime":"2026-02-02T13:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.007390 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.007462 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.007481 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.007507 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.007528 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:01Z","lastTransitionTime":"2026-02-02T13:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.110522 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.110600 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.110616 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.110651 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.110666 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:01Z","lastTransitionTime":"2026-02-02T13:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.213044 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.213115 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.213128 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.213144 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.213155 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:01Z","lastTransitionTime":"2026-02-02T13:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.315726 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.315772 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.315781 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.315799 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.315811 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:01Z","lastTransitionTime":"2026-02-02T13:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.394229 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 18:44:38.044251768 +0000 UTC Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.409773 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.409827 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.409958 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:01 crc kubenswrapper[4721]: E0202 13:02:01.410113 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:02:01 crc kubenswrapper[4721]: E0202 13:02:01.410231 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:01 crc kubenswrapper[4721]: E0202 13:02:01.410407 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.418589 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.418644 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.418660 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.418682 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.419193 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:01Z","lastTransitionTime":"2026-02-02T13:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.523241 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.523289 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.523301 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.523318 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.523329 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:01Z","lastTransitionTime":"2026-02-02T13:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.626960 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.627016 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.627037 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.627102 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.627125 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:01Z","lastTransitionTime":"2026-02-02T13:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.730961 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.731033 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.731051 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.731114 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.731157 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:01Z","lastTransitionTime":"2026-02-02T13:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.834659 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.834714 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.834727 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.834743 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.834753 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:01Z","lastTransitionTime":"2026-02-02T13:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.937888 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.937935 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.937947 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.937965 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:01 crc kubenswrapper[4721]: I0202 13:02:01.937978 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:01Z","lastTransitionTime":"2026-02-02T13:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.040479 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.040522 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.040532 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.040558 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.040570 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:02Z","lastTransitionTime":"2026-02-02T13:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.143697 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.143756 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.143770 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.143790 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.143801 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:02Z","lastTransitionTime":"2026-02-02T13:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.246873 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.246939 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.246954 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.246976 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.246993 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:02Z","lastTransitionTime":"2026-02-02T13:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.350375 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.350440 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.350468 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.350504 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.350529 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:02Z","lastTransitionTime":"2026-02-02T13:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.394430 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 17:04:27.916590308 +0000 UTC Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.409220 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:02 crc kubenswrapper[4721]: E0202 13:02:02.409525 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.454591 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.454648 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.454673 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.454705 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.454729 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:02Z","lastTransitionTime":"2026-02-02T13:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.558494 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.558531 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.558540 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.558557 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.558568 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:02Z","lastTransitionTime":"2026-02-02T13:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.663388 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.663447 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.663463 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.663492 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.663512 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:02Z","lastTransitionTime":"2026-02-02T13:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.766655 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.766716 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.766725 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.766743 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.766778 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:02Z","lastTransitionTime":"2026-02-02T13:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.869776 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.869815 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.869831 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.869852 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.869864 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:02Z","lastTransitionTime":"2026-02-02T13:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.973003 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.973041 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.973051 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.973093 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:02 crc kubenswrapper[4721]: I0202 13:02:02.973107 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:02Z","lastTransitionTime":"2026-02-02T13:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.076096 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.076131 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.076142 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.076165 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.076175 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:03Z","lastTransitionTime":"2026-02-02T13:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.179182 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.179218 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.179227 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.179244 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.179264 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:03Z","lastTransitionTime":"2026-02-02T13:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.282133 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.282167 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.282175 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.282191 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.282202 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:03Z","lastTransitionTime":"2026-02-02T13:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.385464 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.385503 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.385514 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.385535 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.385548 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:03Z","lastTransitionTime":"2026-02-02T13:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.395332 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 12:43:02.190844105 +0000 UTC Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.409398 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:03 crc kubenswrapper[4721]: E0202 13:02:03.409670 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.410028 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:03 crc kubenswrapper[4721]: E0202 13:02:03.410229 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.410348 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:02:03 crc kubenswrapper[4721]: E0202 13:02:03.410468 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.488492 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.488618 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.488633 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.488651 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.488663 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:03Z","lastTransitionTime":"2026-02-02T13:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.592681 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.592731 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.592749 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.592772 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.592789 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:03Z","lastTransitionTime":"2026-02-02T13:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.697818 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.697903 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.697961 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.697985 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.698002 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:03Z","lastTransitionTime":"2026-02-02T13:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.800806 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.800838 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.800847 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.800861 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.800872 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:03Z","lastTransitionTime":"2026-02-02T13:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.904095 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.904150 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.904163 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.904187 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:03 crc kubenswrapper[4721]: I0202 13:02:03.904201 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:03Z","lastTransitionTime":"2026-02-02T13:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.007573 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.007635 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.007644 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.007661 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.007671 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:04Z","lastTransitionTime":"2026-02-02T13:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.110205 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.110274 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.110286 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.110310 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.110325 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:04Z","lastTransitionTime":"2026-02-02T13:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.213376 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.213436 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.213487 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.213523 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.213540 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:04Z","lastTransitionTime":"2026-02-02T13:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.317656 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.317723 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.317735 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.317756 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.317772 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:04Z","lastTransitionTime":"2026-02-02T13:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.395990 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 01:47:15.710102624 +0000 UTC Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.409487 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:04 crc kubenswrapper[4721]: E0202 13:02:04.409620 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.420334 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.420359 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.420367 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.420381 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.420392 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:04Z","lastTransitionTime":"2026-02-02T13:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.524123 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.524160 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.524173 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.524188 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.524199 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:04Z","lastTransitionTime":"2026-02-02T13:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.626723 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.626757 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.626769 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.626784 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.626795 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:04Z","lastTransitionTime":"2026-02-02T13:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.729803 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.729852 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.729863 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.729887 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.729899 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:04Z","lastTransitionTime":"2026-02-02T13:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.832277 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.832313 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.832323 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.832336 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.832354 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:04Z","lastTransitionTime":"2026-02-02T13:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.934847 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.934888 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.934899 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.934914 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:04 crc kubenswrapper[4721]: I0202 13:02:04.934927 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:04Z","lastTransitionTime":"2026-02-02T13:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.037707 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.038013 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.038141 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.038254 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.038360 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:05Z","lastTransitionTime":"2026-02-02T13:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.140569 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.140634 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.140645 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.140666 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.140679 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:05Z","lastTransitionTime":"2026-02-02T13:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.243453 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.243516 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.243528 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.243550 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.243564 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:05Z","lastTransitionTime":"2026-02-02T13:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.346407 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.346484 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.346505 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.346529 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.346544 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:05Z","lastTransitionTime":"2026-02-02T13:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.397027 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 09:01:25.734661984 +0000 UTC Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.409438 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.409476 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.409438 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:02:05 crc kubenswrapper[4721]: E0202 13:02:05.409610 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:05 crc kubenswrapper[4721]: E0202 13:02:05.409684 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:05 crc kubenswrapper[4721]: E0202 13:02:05.409846 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.410753 4721 scope.go:117] "RemoveContainer" containerID="6a1a04dffbd006aaa015d90a67b9f605aea78ad5e0c0dd17cff95375c2bc73f1" Feb 02 13:02:05 crc kubenswrapper[4721]: E0202 13:02:05.411107 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pwcs2_openshift-ovn-kubernetes(b15bc48d-f88d-4b38-a9e1-00bb00b88a52)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.449840 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.449929 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.449955 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.450010 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.450029 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:05Z","lastTransitionTime":"2026-02-02T13:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.552910 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.552958 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.552970 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.552991 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.553005 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:05Z","lastTransitionTime":"2026-02-02T13:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.655830 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.656313 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.656602 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.656917 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.657190 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:05Z","lastTransitionTime":"2026-02-02T13:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.760703 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.760755 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.760767 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.760783 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.760796 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:05Z","lastTransitionTime":"2026-02-02T13:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.862573 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.862832 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.862930 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.863027 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.863141 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:05Z","lastTransitionTime":"2026-02-02T13:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.965243 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.965305 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.965322 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.965369 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:05 crc kubenswrapper[4721]: I0202 13:02:05.965386 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:05Z","lastTransitionTime":"2026-02-02T13:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.068541 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.068848 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.068929 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.069018 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.069148 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:06Z","lastTransitionTime":"2026-02-02T13:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.172727 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.172773 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.172781 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.172796 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.172805 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:06Z","lastTransitionTime":"2026-02-02T13:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.275827 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.276176 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.276271 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.276375 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.276442 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:06Z","lastTransitionTime":"2026-02-02T13:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.379957 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.380021 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.380040 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.380104 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.380126 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:06Z","lastTransitionTime":"2026-02-02T13:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.397259 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 08:07:18.786488896 +0000 UTC Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.409507 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:06 crc kubenswrapper[4721]: E0202 13:02:06.409681 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.482317 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.482346 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.482358 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.482369 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.482378 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:06Z","lastTransitionTime":"2026-02-02T13:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.578359 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.578421 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.578434 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.578456 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.578468 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:06Z","lastTransitionTime":"2026-02-02T13:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:06 crc kubenswrapper[4721]: E0202 13:02:06.592600 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:06Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.597895 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.597949 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.597967 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.597990 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.598006 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:06Z","lastTransitionTime":"2026-02-02T13:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:06 crc kubenswrapper[4721]: E0202 13:02:06.615662 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:06Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.619762 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.619806 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.619816 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.619836 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.619847 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:06Z","lastTransitionTime":"2026-02-02T13:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:06 crc kubenswrapper[4721]: E0202 13:02:06.632551 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:06Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.636487 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.636533 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.636567 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.636593 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.636610 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:06Z","lastTransitionTime":"2026-02-02T13:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:06 crc kubenswrapper[4721]: E0202 13:02:06.649714 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:06Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.653543 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.653580 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.653591 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.653606 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.653620 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:06Z","lastTransitionTime":"2026-02-02T13:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:06 crc kubenswrapper[4721]: E0202 13:02:06.669300 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:06Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:06 crc kubenswrapper[4721]: E0202 13:02:06.669498 4721 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.671139 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.671214 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.671231 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.671249 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.671262 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:06Z","lastTransitionTime":"2026-02-02T13:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.774672 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.775110 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.775332 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.775556 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.775721 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:06Z","lastTransitionTime":"2026-02-02T13:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.878417 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.878678 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.878772 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.878862 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.879218 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:06Z","lastTransitionTime":"2026-02-02T13:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.982210 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.982260 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.982271 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.982289 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:06 crc kubenswrapper[4721]: I0202 13:02:06.982300 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:06Z","lastTransitionTime":"2026-02-02T13:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.084974 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.085024 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.085037 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.085055 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.085088 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:07Z","lastTransitionTime":"2026-02-02T13:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.187456 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.187539 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.187551 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.187569 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.187586 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:07Z","lastTransitionTime":"2026-02-02T13:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.290388 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.290449 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.290463 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.290485 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.290500 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:07Z","lastTransitionTime":"2026-02-02T13:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.393267 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.393321 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.393341 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.393364 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.393378 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:07Z","lastTransitionTime":"2026-02-02T13:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.398556 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 00:38:47.412392037 +0000 UTC Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.408985 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.409085 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:07 crc kubenswrapper[4721]: E0202 13:02:07.409125 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.409085 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:07 crc kubenswrapper[4721]: E0202 13:02:07.409237 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:07 crc kubenswrapper[4721]: E0202 13:02:07.409345 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.496442 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.496488 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.496498 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.496514 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.496524 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:07Z","lastTransitionTime":"2026-02-02T13:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.599062 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.599125 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.599137 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.599155 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.599167 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:07Z","lastTransitionTime":"2026-02-02T13:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.701116 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.701150 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.701159 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.701172 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.701182 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:07Z","lastTransitionTime":"2026-02-02T13:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.803941 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.803977 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.803986 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.803999 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.804017 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:07Z","lastTransitionTime":"2026-02-02T13:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.907961 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.908021 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.908034 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.908055 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:07 crc kubenswrapper[4721]: I0202 13:02:07.908091 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:07Z","lastTransitionTime":"2026-02-02T13:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.010583 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.010646 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.010661 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.010682 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.010693 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:08Z","lastTransitionTime":"2026-02-02T13:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.112535 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.112574 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.112584 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.112600 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.112610 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:08Z","lastTransitionTime":"2026-02-02T13:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.214967 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.215413 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.215498 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.215575 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.215720 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:08Z","lastTransitionTime":"2026-02-02T13:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.318360 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.318564 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.318692 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.318795 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.318869 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:08Z","lastTransitionTime":"2026-02-02T13:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.398962 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 06:09:59.011486593 +0000 UTC Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.408889 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:08 crc kubenswrapper[4721]: E0202 13:02:08.409025 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.421091 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.421134 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.421145 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.421162 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.421174 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:08Z","lastTransitionTime":"2026-02-02T13:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.523613 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.523658 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.523668 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.523683 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.523698 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:08Z","lastTransitionTime":"2026-02-02T13:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.626330 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.626410 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.626433 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.626461 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.626478 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:08Z","lastTransitionTime":"2026-02-02T13:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.729098 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.729438 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.729520 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.729599 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.729738 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:08Z","lastTransitionTime":"2026-02-02T13:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.832212 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.832488 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.832584 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.832799 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.833053 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:08Z","lastTransitionTime":"2026-02-02T13:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.940010 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.940057 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.940094 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.940114 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:08 crc kubenswrapper[4721]: I0202 13:02:08.940129 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:08Z","lastTransitionTime":"2026-02-02T13:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.044494 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.044564 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.044574 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.044594 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.044606 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:09Z","lastTransitionTime":"2026-02-02T13:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.147735 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.147802 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.147822 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.147843 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.147856 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:09Z","lastTransitionTime":"2026-02-02T13:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.250983 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.251045 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.251057 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.251102 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.251116 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:09Z","lastTransitionTime":"2026-02-02T13:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.354186 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.354268 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.354279 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.354300 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.354312 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:09Z","lastTransitionTime":"2026-02-02T13:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.400568 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:34:18.870162974 +0000 UTC Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.409111 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.409237 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:02:09 crc kubenswrapper[4721]: E0202 13:02:09.409318 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.409111 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:09 crc kubenswrapper[4721]: E0202 13:02:09.409395 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:02:09 crc kubenswrapper[4721]: E0202 13:02:09.409595 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.457581 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.457645 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.457659 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.457687 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.457706 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:09Z","lastTransitionTime":"2026-02-02T13:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.560018 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.560084 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.560097 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.560115 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.560128 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:09Z","lastTransitionTime":"2026-02-02T13:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.663476 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.663546 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.663558 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.663579 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.663595 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:09Z","lastTransitionTime":"2026-02-02T13:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.765969 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.766021 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.766033 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.766051 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.766062 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:09Z","lastTransitionTime":"2026-02-02T13:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.869589 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.869659 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.869697 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.869718 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.869732 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:09Z","lastTransitionTime":"2026-02-02T13:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.972655 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.972712 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.972723 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.972736 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:09 crc kubenswrapper[4721]: I0202 13:02:09.972745 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:09Z","lastTransitionTime":"2026-02-02T13:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.075450 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.075519 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.075528 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.075549 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.075565 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:10Z","lastTransitionTime":"2026-02-02T13:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.178438 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.178508 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.178526 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.178555 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.178575 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:10Z","lastTransitionTime":"2026-02-02T13:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.281987 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.282032 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.282044 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.282079 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.282090 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:10Z","lastTransitionTime":"2026-02-02T13:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.384548 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.384595 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.384607 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.384623 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.384635 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:10Z","lastTransitionTime":"2026-02-02T13:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.401105 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 12:19:42.97536295 +0000 UTC Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.409549 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:10 crc kubenswrapper[4721]: E0202 13:02:10.409697 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.423628 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09e7b999-d47e-409d-bafe-ffde8c03995e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b10d453f884ce1095dd9bb0f8d91ebd466c4e2771a3a425abbe3001084cb09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1bc82fcd0af4a3c1e3508fe525c95017c03f521388a28e7eaad2bb5dbb0d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8797e4a71975106b9aa6c6519ef0d64047df3b21e3164f586b1441f5897e0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e8c9b4927e7df11ad15f3436346ceef587ef7e5e8716d26bd2e9b2000f9bc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c9b4927e7df11ad15f3436346ceef587ef7e5e8716d26bd2e9b2000f9bc16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.443289 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.455215 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e
6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.477297 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1a04dffbd006aaa015d90a67b9f605aea78ad5
e0c0dd17cff95375c2bc73f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a1a04dffbd006aaa015d90a67b9f605aea78ad5e0c0dd17cff95375c2bc73f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:01:52Z\\\",\\\"message\\\":\\\"map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 13:01:52.319818 6399 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-48kgl\\\\nI0202 13:01:52.319835 6399 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 13:01:52.319869 6399 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwcs2_openshift-ovn-kubernetes(b15bc48d-f88d-4b38-a9e1-00bb00b88a52)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.487164 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.487197 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.487209 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.487242 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.487255 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:10Z","lastTransitionTime":"2026-02-02T13:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.492832 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.511979 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.523842 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.536650 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.551550 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecfc8a9e-993f-494a-ba91-4132345cee05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e9bc6a3dd0746c09c22e858873a85ac0fd67fe903dfa6f2cd5061020ce5230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c7004e0fbbf58f24a8539f9e323fdba3cbe648aff6b6c7be443d765260a148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sz7tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:10Z is after 2025-08-24T17:21:41Z" Feb 02 
13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.572956 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.589332 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.589541 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.589724 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.589767 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.589798 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.589840 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:10Z","lastTransitionTime":"2026-02-02T13:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.605442 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.621522 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqz79" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfab3ffb-8798-423d-9b55-83868b76a14e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqz79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.639944 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef571a7e891a1eb9aa9c8d0c3590e57c27aed2b369226612798dfa8e9fff15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.658564 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.674491 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.688588 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:10Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.692392 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.692419 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.692427 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.692440 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.692450 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:10Z","lastTransitionTime":"2026-02-02T13:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.704420 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:10Z is after 2025-08-24T17:21:41Z"
Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.794964 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.795007 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.795016 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.795035 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.795043 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:10Z","lastTransitionTime":"2026-02-02T13:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.898059 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.898118 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.898130 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.898144 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:10 crc kubenswrapper[4721]: I0202 13:02:10.898157 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:10Z","lastTransitionTime":"2026-02-02T13:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.001057 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.001126 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.001138 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.001153 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.001163 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:11Z","lastTransitionTime":"2026-02-02T13:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.103290 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.103335 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.103345 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.103360 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.103370 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:11Z","lastTransitionTime":"2026-02-02T13:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.205652 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.205689 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.205698 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.205711 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.205721 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:11Z","lastTransitionTime":"2026-02-02T13:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.309033 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.309103 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.309118 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.309140 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.309156 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:11Z","lastTransitionTime":"2026-02-02T13:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.401279 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 08:17:58.924369139 +0000 UTC
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.409775 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.409776 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 13:02:11 crc kubenswrapper[4721]: E0202 13:02:11.409957 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e"
Feb 02 13:02:11 crc kubenswrapper[4721]: E0202 13:02:11.410140 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.410669 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 13:02:11 crc kubenswrapper[4721]: E0202 13:02:11.410808 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.411918 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.411965 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.411973 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.411988 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.411997 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:11Z","lastTransitionTime":"2026-02-02T13:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.515291 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.515353 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.515550 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.515567 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.515578 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:11Z","lastTransitionTime":"2026-02-02T13:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.618520 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.618570 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.618583 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.618601 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.618613 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:11Z","lastTransitionTime":"2026-02-02T13:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.721189 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.721259 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.721272 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.721291 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.721303 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:11Z","lastTransitionTime":"2026-02-02T13:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.824112 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.824168 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.824185 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.824207 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.824221 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:11Z","lastTransitionTime":"2026-02-02T13:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.926980 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.927049 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.927095 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.927119 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.927132 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:11Z","lastTransitionTime":"2026-02-02T13:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:11 crc kubenswrapper[4721]: I0202 13:02:11.941779 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfab3ffb-8798-423d-9b55-83868b76a14e-metrics-certs\") pod \"network-metrics-daemon-xqz79\" (UID: \"bfab3ffb-8798-423d-9b55-83868b76a14e\") " pod="openshift-multus/network-metrics-daemon-xqz79"
Feb 02 13:02:11 crc kubenswrapper[4721]: E0202 13:02:11.942030 4721 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 13:02:11 crc kubenswrapper[4721]: E0202 13:02:11.942150 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfab3ffb-8798-423d-9b55-83868b76a14e-metrics-certs podName:bfab3ffb-8798-423d-9b55-83868b76a14e nodeName:}" failed. No retries permitted until 2026-02-02 13:02:43.942125986 +0000 UTC m=+104.244640405 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bfab3ffb-8798-423d-9b55-83868b76a14e-metrics-certs") pod "network-metrics-daemon-xqz79" (UID: "bfab3ffb-8798-423d-9b55-83868b76a14e") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.032174 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.032232 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.032249 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.032272 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.032287 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:12Z","lastTransitionTime":"2026-02-02T13:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.135239 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.135281 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.135291 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.135309 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.135354 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:12Z","lastTransitionTime":"2026-02-02T13:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.237580 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.237616 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.237624 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.237636 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.237646 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:12Z","lastTransitionTime":"2026-02-02T13:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.340248 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.340306 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.340317 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.340342 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.340355 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:12Z","lastTransitionTime":"2026-02-02T13:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.401834 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 12:04:34.543556836 +0000 UTC
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.425469 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 13:02:12 crc kubenswrapper[4721]: E0202 13:02:12.425633 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.442913 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.442936 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.442943 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.442953 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.442962 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:12Z","lastTransitionTime":"2026-02-02T13:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.545836 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.545887 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.545897 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.545912 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.545922 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:12Z","lastTransitionTime":"2026-02-02T13:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.648684 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.648740 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.648756 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.648779 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.648797 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:12Z","lastTransitionTime":"2026-02-02T13:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.751212 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.751260 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.751269 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.751284 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.751295 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:12Z","lastTransitionTime":"2026-02-02T13:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.853206 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.853254 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.853265 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.853281 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.853290 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:12Z","lastTransitionTime":"2026-02-02T13:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.955939 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.955977 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.955986 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.956000 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:12 crc kubenswrapper[4721]: I0202 13:02:12.956011 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:12Z","lastTransitionTime":"2026-02-02T13:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.058676 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.058783 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.058806 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.058834 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.058851 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:13Z","lastTransitionTime":"2026-02-02T13:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.161907 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.161969 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.161979 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.162000 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.162020 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:13Z","lastTransitionTime":"2026-02-02T13:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.264671 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.264721 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.264736 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.264755 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.264769 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:13Z","lastTransitionTime":"2026-02-02T13:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.366879 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.366922 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.366932 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.366946 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.366956 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:13Z","lastTransitionTime":"2026-02-02T13:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.402769 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 12:58:58.345738664 +0000 UTC
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.409245 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.409261 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79"
Feb 02 13:02:13 crc kubenswrapper[4721]: E0202 13:02:13.409478 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.409268 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 13:02:13 crc kubenswrapper[4721]: E0202 13:02:13.409606 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e"
Feb 02 13:02:13 crc kubenswrapper[4721]: E0202 13:02:13.409676 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.469514 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.469561 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.469576 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.469594 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.469607 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:13Z","lastTransitionTime":"2026-02-02T13:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.572125 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.572197 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.572209 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.572228 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.572241 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:13Z","lastTransitionTime":"2026-02-02T13:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.675204 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.675252 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.675264 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.675281 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.675292 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:13Z","lastTransitionTime":"2026-02-02T13:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.778157 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.778448 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.778466 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.778485 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.778498 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:13Z","lastTransitionTime":"2026-02-02T13:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.881055 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.881119 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.881129 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.881145 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.881156 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:13Z","lastTransitionTime":"2026-02-02T13:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.885118 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ltw7d_5ba84858-caaa-4fba-8eaf-9f7ddece0b3a/kube-multus/0.log"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.885226 4721 generic.go:334] "Generic (PLEG): container finished" podID="5ba84858-caaa-4fba-8eaf-9f7ddece0b3a" containerID="0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6" exitCode=1
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.885272 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ltw7d" event={"ID":"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a","Type":"ContainerDied","Data":"0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6"}
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.885966 4721 scope.go:117] "RemoveContainer" containerID="0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.898194 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:13Z is after 2025-08-24T17:21:41Z"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.915355 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqz79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfab3ffb-8798-423d-9b55-83868b76a14e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqz79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:13Z is after 2025-08-24T17:21:41Z"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.939203 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:13Z is after 2025-08-24T17:21:41Z"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.956295 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:13Z is after 2025-08-24T17:21:41Z"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.975035 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:13Z is after 2025-08-24T17:21:41Z"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.983556 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.983631 4721 kubelet_node_status.go:724] "Recording
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.983642 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.983667 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.983681 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:13Z","lastTransitionTime":"2026-02-02T13:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:13 crc kubenswrapper[4721]: I0202 13:02:13.994835 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:13Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.017030 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:02:13Z\\\",\\\"message\\\":\\\"2026-02-02T13:01:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7c44ae0c-2ffe-42d2-9e4e-a4f37c32136f\\\\n2026-02-02T13:01:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7c44ae0c-2ffe-42d2-9e4e-a4f37c32136f to /host/opt/cni/bin/\\\\n2026-02-02T13:01:28Z [verbose] multus-daemon 
started\\\\n2026-02-02T13:01:28Z [verbose] Readiness Indicator file check\\\\n2026-02-02T13:02:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.032772 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef571a7e891a1eb9aa9c8d0c3590e57c27aed2b369226612798dfa8e9fff15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.048200 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.061750 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.082707 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1a04dffbd006aaa015d90a67b9f605aea78ad5e0c0dd17cff95375c2bc73f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a1a04dffbd006aaa015d90a67b9f605aea78ad5e0c0dd17cff95375c2bc73f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:01:52Z\\\",\\\"message\\\":\\\"map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 13:01:52.319818 6399 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-48kgl\\\\nI0202 13:01:52.319835 6399 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 13:01:52.319869 6399 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwcs2_openshift-ovn-kubernetes(b15bc48d-f88d-4b38-a9e1-00bb00b88a52)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.086569 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.086620 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.086629 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.086648 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.086658 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:14Z","lastTransitionTime":"2026-02-02T13:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.095537 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09e7b999-d47e-409d-bafe-ffde8c03995e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b10d453f884ce1095dd9bb0f8d91ebd466c4e2771a3a425abbe3001084cb09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1bc82fcd0af4a3c1e3508fe525c95017c03f521388a28e7eaad2bb5dbb0d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8797e4a71975106b9aa6c6519ef0d64047df3b21e3164f586b1441f5897e0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e8c9b4927e7df11ad15f3436346ceef587ef7e5e8716d26bd2e9b2000f9bc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c9b4927e7df11ad15f3436346ceef587ef7e5e8716d26bd2e9b2000f9bc16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.110627 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.122820 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.135440 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.149222 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecfc8a9e-993f-494a-ba91-4132345cee05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e9bc6a3dd0746c09c22e858873a85ac0fd67fe903dfa6f2cd5061020ce5230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c7004e0fbbf58f24a8539f9e323fdba3cbe648aff6b6c7be443d765260a148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sz7tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:14Z is after 2025-08-24T17:21:41Z" Feb 02 
13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.169043 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.184830 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.189090 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.189135 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.189148 4721 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.189171 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.189184 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:14Z","lastTransitionTime":"2026-02-02T13:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.291428 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.291462 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.291471 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.291485 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.291494 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:14Z","lastTransitionTime":"2026-02-02T13:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.394287 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.394328 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.394339 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.394357 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.394370 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:14Z","lastTransitionTime":"2026-02-02T13:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.403859 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 14:16:00.081658428 +0000 UTC Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.409187 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:14 crc kubenswrapper[4721]: E0202 13:02:14.409330 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.496725 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.496755 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.496765 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.496783 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.496796 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:14Z","lastTransitionTime":"2026-02-02T13:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.598971 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.599002 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.599012 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.599025 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.599034 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:14Z","lastTransitionTime":"2026-02-02T13:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.701639 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.701670 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.701678 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.701692 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.701700 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:14Z","lastTransitionTime":"2026-02-02T13:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.804313 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.804366 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.804380 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.804402 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.804417 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:14Z","lastTransitionTime":"2026-02-02T13:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.891569 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ltw7d_5ba84858-caaa-4fba-8eaf-9f7ddece0b3a/kube-multus/0.log" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.891634 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ltw7d" event={"ID":"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a","Type":"ContainerStarted","Data":"3e01f486c1be69ceb8e869e6feefbef4e07fd230f5cb41ec3adfaaba36430569"} Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.908915 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.908955 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.908964 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.908982 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.908993 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:14Z","lastTransitionTime":"2026-02-02T13:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.909915 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.927234 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.945099 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.960378 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:14Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:14 crc kubenswrapper[4721]: I0202 13:02:14.976243 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecfc8a9e-993f-494a-ba91-4132345cee05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e9bc6a3dd0746c09c22e858873a85ac0fd67fe903dfa6f2cd5061020ce5230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c7004e0fbbf58f24a8539f9e323fdba3cbe648aff6b6c7be443d765260a148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sz7tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:14Z is after 2025-08-24T17:21:41Z" Feb 02 
13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.012467 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.012769 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.012789 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.012811 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.012829 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:15Z","lastTransitionTime":"2026-02-02T13:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.013032 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:15Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.031797 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:15Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.050518 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:15Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.066054 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqz79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfab3ffb-8798-423d-9b55-83868b76a14e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqz79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:15Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.083602 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:15Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.099613 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:15Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.114036 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:15Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.115765 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.115796 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.115807 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.115824 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.115838 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:15Z","lastTransitionTime":"2026-02-02T13:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.129015 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e01f486c1be69ceb8e869e6feefbef4e07fd230f5cb41ec3adfaaba36430569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:02:13Z\\\",\\\"message\\\":\\\"2026-02-02T13:01:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7c44ae0c-2ffe-42d2-9e4e-a4f37c32136f\\\\n2026-02-02T13:01:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7c44ae0c-2ffe-42d2-9e4e-a4f37c32136f to /host/opt/cni/bin/\\\\n2026-02-02T13:01:28Z [verbose] multus-daemon started\\\\n2026-02-02T13:01:28Z [verbose] Readiness Indicator file check\\\\n2026-02-02T13:02:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:15Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.146970 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef571a7e891a1eb9aa9c8d0c3590e57c27aed2b369226612798dfa8e9fff15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:15Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.159862 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09e7b999-d47e-409d-bafe-ffde8c03995e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b10d453f884ce1095dd9bb0f8d91ebd466c4e2771a3a425abbe3001084cb09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1bc82fcd0af4a3c1e3508fe525c95017c03f521388a28e7eaad2bb5dbb0d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8797e4a71975106b9aa6c6519ef0d64047df3b21e3164f586b1441f5897e0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e8c9b4927e7df11ad15f3436346ceef587ef7e5e8716d26bd2e9b2000f9bc16\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c9b4927e7df11ad15f3436346ceef587ef7e5e8716d26bd2e9b2000f9bc16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:15Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.171810 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:15Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.187632 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:15Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.211324 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a1a04dffbd006aaa015d90a67b9f605aea78ad5e0c0dd17cff95375c2bc73f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a1a04dffbd006aaa015d90a67b9f605aea78ad5e0c0dd17cff95375c2bc73f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:01:52Z\\\",\\\"message\\\":\\\"map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 13:01:52.319818 6399 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-48kgl\\\\nI0202 13:01:52.319835 6399 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 13:01:52.319869 6399 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwcs2_openshift-ovn-kubernetes(b15bc48d-f88d-4b38-a9e1-00bb00b88a52)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:15Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.219447 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.219499 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.219523 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.219553 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.219574 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:15Z","lastTransitionTime":"2026-02-02T13:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.322571 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.322605 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.322617 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.322631 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.322643 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:15Z","lastTransitionTime":"2026-02-02T13:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.404817 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 05:32:21.501849207 +0000 UTC Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.409300 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:15 crc kubenswrapper[4721]: E0202 13:02:15.409464 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.409550 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:02:15 crc kubenswrapper[4721]: E0202 13:02:15.409619 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.409672 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:15 crc kubenswrapper[4721]: E0202 13:02:15.409727 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.425530 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.425570 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.425586 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.425608 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.425624 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:15Z","lastTransitionTime":"2026-02-02T13:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.529016 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.529114 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.529137 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.529164 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.529187 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:15Z","lastTransitionTime":"2026-02-02T13:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.631875 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.631934 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.631948 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.631968 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.631984 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:15Z","lastTransitionTime":"2026-02-02T13:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.735473 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.735563 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.735587 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.735613 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.735645 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:15Z","lastTransitionTime":"2026-02-02T13:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.839000 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.839164 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.839182 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.839242 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.839270 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:15Z","lastTransitionTime":"2026-02-02T13:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.942790 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.942834 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.942850 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.942870 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:15 crc kubenswrapper[4721]: I0202 13:02:15.942885 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:15Z","lastTransitionTime":"2026-02-02T13:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.045581 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.045631 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.045643 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.045658 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.045670 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:16Z","lastTransitionTime":"2026-02-02T13:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.150394 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.150464 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.150482 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.150505 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.150523 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:16Z","lastTransitionTime":"2026-02-02T13:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.254254 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.254331 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.254353 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.254383 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.254408 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:16Z","lastTransitionTime":"2026-02-02T13:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.357215 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.357274 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.357289 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.357309 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.357324 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:16Z","lastTransitionTime":"2026-02-02T13:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.405764 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 12:34:27.214053924 +0000 UTC Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.409264 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:16 crc kubenswrapper[4721]: E0202 13:02:16.409492 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.460193 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.460274 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.460298 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.460332 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.460359 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:16Z","lastTransitionTime":"2026-02-02T13:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.563729 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.563797 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.563819 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.563847 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.563867 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:16Z","lastTransitionTime":"2026-02-02T13:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.665964 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.666049 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.666131 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.666167 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.666197 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:16Z","lastTransitionTime":"2026-02-02T13:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.768667 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.768727 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.768741 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.768758 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.768771 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:16Z","lastTransitionTime":"2026-02-02T13:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.846827 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.846887 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.846900 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.846924 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.846948 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:16Z","lastTransitionTime":"2026-02-02T13:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:16 crc kubenswrapper[4721]: E0202 13:02:16.867493 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:16Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.872618 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.872662 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.872675 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.872693 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.872705 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:16Z","lastTransitionTime":"2026-02-02T13:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:16 crc kubenswrapper[4721]: E0202 13:02:16.890326 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:16Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.894628 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.894672 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.894684 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.894701 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.894712 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:16Z","lastTransitionTime":"2026-02-02T13:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:16 crc kubenswrapper[4721]: E0202 13:02:16.916484 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:16Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.920270 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.920330 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.920341 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.920359 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.920372 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:16Z","lastTransitionTime":"2026-02-02T13:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:16 crc kubenswrapper[4721]: E0202 13:02:16.933827 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:16Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.937415 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.937450 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.937461 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.937477 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.937489 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:16Z","lastTransitionTime":"2026-02-02T13:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:16 crc kubenswrapper[4721]: E0202 13:02:16.951248 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a135069f-e3e0-400d-820a-6a71848b843d\\\",\\\"systemUUID\\\":\\\"a18387c5-ff07-4cdd-8a5b-70ab978f8648\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:16Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:16 crc kubenswrapper[4721]: E0202 13:02:16.951728 4721 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
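
Annotation: the status-patch retries above all fail identically, and here the kubelet finally gives up ("update node status exceeds retry count"). The root cause is in the error text itself: the serving certificate behind the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-02T13:02:16Z. A minimal Go sketch of the same validity check, assuming only the endpoint taken from the log (diagnostic code, not part of the kubelet or the webhook):

    // certcheck.go -- dial the webhook port named in the log, skip chain
    // verification so the handshake succeeds even with an expired leaf,
    // then compare the certificate's validity window to the local clock.
    package main

    import (
        "crypto/tls"
        "fmt"
        "os"
        "time"
    )

    func main() {
        addr := "127.0.0.1:9743" // webhook endpoint seen in the errors above
        conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Fprintf(os.Stderr, "dial %s: %v\n", addr, err)
            os.Exit(1)
        }
        defer conn.Close()
        leaf := conn.ConnectionState().PeerCertificates[0]
        fmt.Println("notBefore:", leaf.NotBefore.UTC().Format(time.RFC3339))
        fmt.Println("notAfter: ", leaf.NotAfter.UTC().Format(time.RFC3339))
        if now := time.Now(); now.After(leaf.NotAfter) {
            // The same condition x509 verification reports in the log:
            // "certificate has expired ... current time ... is after ...".
            fmt.Println("expired: current time", now.UTC().Format(time.RFC3339),
                "is after", leaf.NotAfter.UTC().Format(time.RFC3339))
        }
    }

Until that certificate is rotated (or the cluster clock corrected), every node-status patch is rejected at the webhook, which is why the identical payload repeats for each retry below.
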
event="NodeHasSufficientMemory" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.953720 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.953730 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.953748 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:16 crc kubenswrapper[4721]: I0202 13:02:16.953759 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:16Z","lastTransitionTime":"2026-02-02T13:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.056343 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.056431 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.056445 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.056461 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.056473 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:17Z","lastTransitionTime":"2026-02-02T13:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.158607 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.158665 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.158678 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.158699 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.158712 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:17Z","lastTransitionTime":"2026-02-02T13:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.260591 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.260634 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.260648 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.260666 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.260681 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:17Z","lastTransitionTime":"2026-02-02T13:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.363226 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.363287 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.363312 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.363333 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.363348 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:17Z","lastTransitionTime":"2026-02-02T13:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.406928 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 13:55:37.409686528 +0000 UTC Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.409303 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.409345 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.409420 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:17 crc kubenswrapper[4721]: E0202 13:02:17.409464 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:17 crc kubenswrapper[4721]: E0202 13:02:17.409576 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:17 crc kubenswrapper[4721]: E0202 13:02:17.409668 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e"
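
Annotation: the sandbox failures above ("No sandbox for pod can be found" followed by "Error syncing pod, skipping") are downstream of the NotReady condition: no pod can get network until a CNI configuration appears in /etc/kubernetes/cni/net.d/. A rough Go sketch of that readiness test, assuming only the directory named in the log (a simplification: the real check runs in the container runtime through libcni, which also parses and validates the file contents):

    // cnicheck.go -- report whether any CNI config exists in the directory
    // the kubelet is complaining about. Simplified relative to libcni,
    // which also loads and validates each file before declaring the
    // network ready.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        confDir := "/etc/kubernetes/cni/net.d" // directory from the log message
        entries, err := os.ReadDir(confDir)
        if err != nil {
            fmt.Fprintf(os.Stderr, "read %s: %v\n", confDir, err)
            os.Exit(1)
        }
        found := false
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json": // extensions libcni accepts
                fmt.Println("found CNI config:", filepath.Join(confDir, e.Name()))
                found = true
            }
        }
        if !found {
            fmt.Println("no CNI configuration file found; has the network provider started?")
        }
    }

The network provider on this node (OVN-Kubernetes, per the network-node-identity webhook above) has evidently not written its config yet, so the same NotReady condition is re-recorded on every heartbeat that follows.
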
Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.466601 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.466671 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.466693 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.466723 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.466744 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:17Z","lastTransitionTime":"2026-02-02T13:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.569902 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.569943 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.569953 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.569971 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.569983 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:17Z","lastTransitionTime":"2026-02-02T13:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.672573 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.672607 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.672615 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.672628 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.672637 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:17Z","lastTransitionTime":"2026-02-02T13:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.775944 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.776011 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.776031 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.776057 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.776105 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:17Z","lastTransitionTime":"2026-02-02T13:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.879919 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.880323 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.880523 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.880711 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.880888 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:17Z","lastTransitionTime":"2026-02-02T13:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.983024 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.983132 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.983158 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.983189 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:17 crc kubenswrapper[4721]: I0202 13:02:17.983212 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:17Z","lastTransitionTime":"2026-02-02T13:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.086127 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.086192 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.086213 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.086237 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.086254 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:18Z","lastTransitionTime":"2026-02-02T13:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.188816 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.188879 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.188891 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.188905 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.188914 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:18Z","lastTransitionTime":"2026-02-02T13:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.291409 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.291460 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.291471 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.291490 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.291501 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:18Z","lastTransitionTime":"2026-02-02T13:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.394855 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.394896 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.394907 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.394928 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.394945 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:18Z","lastTransitionTime":"2026-02-02T13:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.407145 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 12:52:20.436345537 +0000 UTC Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.409584 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:18 crc kubenswrapper[4721]: E0202 13:02:18.409788 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.497624 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.497684 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.497701 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.497725 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.497742 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:18Z","lastTransitionTime":"2026-02-02T13:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.600871 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.600963 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.601004 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.601041 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.601163 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:18Z","lastTransitionTime":"2026-02-02T13:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.704720 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.704832 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.704853 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.704881 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.704901 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:18Z","lastTransitionTime":"2026-02-02T13:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.807805 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.807877 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.807896 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.807922 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.807942 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:18Z","lastTransitionTime":"2026-02-02T13:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.911005 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.911140 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.911161 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.911189 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:18 crc kubenswrapper[4721]: I0202 13:02:18.911206 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:18Z","lastTransitionTime":"2026-02-02T13:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.014805 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.014873 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.014889 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.014912 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.014930 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:19Z","lastTransitionTime":"2026-02-02T13:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.118250 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.118319 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.118336 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.118361 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.118381 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:19Z","lastTransitionTime":"2026-02-02T13:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.221910 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.222373 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.222566 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.222813 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.223473 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:19Z","lastTransitionTime":"2026-02-02T13:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.327060 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.327481 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.327686 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.328574 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.328753 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:19Z","lastTransitionTime":"2026-02-02T13:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.408010 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 13:27:37.926099986 +0000 UTC Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.409375 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.409391 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.409478 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:02:19 crc kubenswrapper[4721]: E0202 13:02:19.409797 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:19 crc kubenswrapper[4721]: E0202 13:02:19.410224 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:02:19 crc kubenswrapper[4721]: E0202 13:02:19.410313 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.411375 4721 scope.go:117] "RemoveContainer" containerID="6a1a04dffbd006aaa015d90a67b9f605aea78ad5e0c0dd17cff95375c2bc73f1" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.432269 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.432341 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.432360 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.432389 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.432409 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:19Z","lastTransitionTime":"2026-02-02T13:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.536131 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.536203 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.536219 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.536241 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.536261 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:19Z","lastTransitionTime":"2026-02-02T13:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.638766 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.638799 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.638836 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.638853 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.638970 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:19Z","lastTransitionTime":"2026-02-02T13:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.742397 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.742442 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.742454 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.742475 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.742494 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:19Z","lastTransitionTime":"2026-02-02T13:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.845579 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.845613 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.845628 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.845643 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.845655 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:19Z","lastTransitionTime":"2026-02-02T13:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.917421 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwcs2_b15bc48d-f88d-4b38-a9e1-00bb00b88a52/ovnkube-controller/2.log" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.919953 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" event={"ID":"b15bc48d-f88d-4b38-a9e1-00bb00b88a52","Type":"ContainerStarted","Data":"0ef87877d327f3682656924644afe911470b558fe3fd45dd708e4d6f0aa69f29"} Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.920405 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.933242 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 
13:02:19.944343 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecfc8a9e-993f-494a-ba91-4132345cee05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e9bc6a3dd0746c09c22e858873a85ac0fd67fe903dfa6f2cd5061020ce5230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c7004e0fbbf58f24a8539f9e323fdba3cbe648aff6b6c7be443d765260a148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sz7tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.949510 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.949572 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.949590 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.949609 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.949628 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:19Z","lastTransitionTime":"2026-02-02T13:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.957918 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.969959 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.982898 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:19 crc kubenswrapper[4721]: I0202 13:02:19.994830 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqz79" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfab3ffb-8798-423d-9b55-83868b76a14e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqz79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.019685 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878
a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.040572 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.052167 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.052205 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.052214 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.052228 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.052238 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:20Z","lastTransitionTime":"2026-02-02T13:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.052855 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.064940 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.077041 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e01f486c1be69ceb8e869e6feefbef4e07fd230f5cb41ec3adfaaba36430569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:02:13Z\\\",\\\"message\\\":\\\"2026-02-02T13:01:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7c44ae0c-2ffe-42d2-9e4e-a4f37c32136f\\\\n2026-02-02T13:01:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7c44ae0c-2ffe-42d2-9e4e-a4f37c32136f to /host/opt/cni/bin/\\\\n2026-02-02T13:01:28Z [verbose] multus-daemon started\\\\n2026-02-02T13:01:28Z [verbose] Readiness Indicator file check\\\\n2026-02-02T13:02:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.089745 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef571a7e891a1eb9aa9c8d0c3590e57c27aed2b369226612798dfa8e9fff15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.102640 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.116157 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.135756 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ef87877d327f3682656924644afe911470b558f
e3fd45dd708e4d6f0aa69f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a1a04dffbd006aaa015d90a67b9f605aea78ad5e0c0dd17cff95375c2bc73f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:01:52Z\\\",\\\"message\\\":\\\"map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 13:01:52.319818 6399 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-48kgl\\\\nI0202 13:01:52.319835 6399 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 13:01:52.319869 6399 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error 
occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.148350 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09e7b999-d47e-409d-bafe-ffde8c03995e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b10d453f884ce1095dd9bb0f8d91ebd466c4e2771a3a425abbe3001084cb09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1bc82fcd0af4a3c1e3508fe525c95017c03f521388a28e7eaad2bb5dbb0d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8797e4a71975106b9aa6c6519ef0d64047df3b21e3164f586b1441f5897e0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e8c9b4927e7df11ad15f3436346ceef587ef7e5e8716d26bd2e9b2000f9bc16\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c9b4927e7df11ad15f3436346ceef587ef7e5e8716d26bd2e9b2000f9bc16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.154380 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.154513 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.154589 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.154650 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.154711 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:20Z","lastTransitionTime":"2026-02-02T13:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.162638 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.174720 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.257859 4721 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.257913 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.257928 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.257949 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.257964 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:20Z","lastTransitionTime":"2026-02-02T13:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.360680 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.360716 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.360725 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.360739 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.360750 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:20Z","lastTransitionTime":"2026-02-02T13:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.408685 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 12:52:54.190244514 +0000 UTC Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.408726 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:20 crc kubenswrapper[4721]: E0202 13:02:20.408945 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.434101 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.451145 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
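
Every status patch in this stretch fails for the same root cause: the network-node-identity webhook at 127.0.0.1:9743 serves a certificate whose NotAfter (2025-08-24T17:21:41Z) is before the node's clock (2026-02-02). Go's TLS stack emits the "expired or is not yet valid" message whenever the current time falls outside the certificate's validity window. A minimal reproduction of that check — the certificate path below is a placeholder, not taken from the log:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Placeholder path; substitute the webhook's actual serving cert.
	data, err := os.ReadFile("/path/to/webhook-serving.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	now := time.Now()
	// The same validity-window test that yields
	// "x509: certificate has expired or is not yet valid".
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("invalid: current time %s is outside [%s, %s]\n",
			now.Format(time.RFC3339),
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339))
		return
	}
	fmt.Println("certificate is within its validity window")
}
```
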
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.467449 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.467533 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.467543 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.467567 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.467577 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:20Z","lastTransitionTime":"2026-02-02T13:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.468389 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.488684 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status 
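
The payloads the kubelet keeps trying to send are strategic-merge patches: the $setElementOrder/conditions directive pins the ordering of the conditions list, which is merged by its "type" key rather than replaced wholesale. A stripped-down sketch of how such a patch is applied, using the apimachinery helper — this assumes a Go module with k8s.io/api and k8s.io/apimachinery available, and the JSON is abbreviated from the payloads above:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/strategicpatch"
)

func main() {
	// Current status as the API server would hold it (abbreviated).
	original := []byte(`{"status":{"conditions":[
		{"type":"Ready","status":"False"},
		{"type":"PodScheduled","status":"True"}]}}`)

	// Patch in the same shape the kubelet sends above: conditions are
	// merged by their "type" key, and $setElementOrder fixes the order.
	patch := []byte(`{"status":{
		"$setElementOrder/conditions":[{"type":"Ready"},{"type":"PodScheduled"}],
		"conditions":[{"type":"Ready","status":"True"}]}}`)

	merged, err := strategicpatch.StrategicMergePatch(original, patch, corev1.Pod{})
	if err != nil {
		fmt.Println(err)
		return
	}
	// Prints the status with Ready flipped to True and PodScheduled kept.
	fmt.Println(string(merged))
}
```

Note the failure here is not in the patch itself: the API server rejects the request because its admission webhook call fails, so the well-formed patch never lands.
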
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e01f486c1be69ceb8e869e6feefbef4e07fd230f5cb41ec3adfaaba36430569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:02:13Z\\\",\\\"message\\\":\\\"2026-02-02T13:01:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7c44ae0c-2ffe-42d2-9e4e-a4f37c32136f\\\\n2026-02-02T13:01:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7c44ae0c-2ffe-42d2-9e4e-a4f37c32136f to /host/opt/cni/bin/\\\\n2026-02-02T13:01:28Z [verbose] multus-daemon started\\\\n2026-02-02T13:01:28Z [verbose] Readiness Indicator file check\\\\n2026-02-02T13:02:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.506265 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status 
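
The kube-multus restart above is the other side of the same deadlock: multus polls for OVN-Kubernetes' readiness indicator file and exits when the poll times out ("pollimmediate error: timed out waiting for the condition"). The shape of that wait, as a stdlib-only sketch — the real daemon uses apimachinery's polling helpers; the path is from the log, and the timeout below approximates the ~45s the log shows elapsing (13:01:28 start, 13:02:13 failure):

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// waitForFile polls every interval until path exists or timeout elapses,
// mirroring multus' readiness-indicator check.
func waitForFile(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for the condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	// Path from the log; OVN-Kubernetes drops this file once its node
	// controller has configured the CNI.
	err := waitForFile("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf",
		time.Second, 45*time.Second)
	fmt.Println(err)
}
```
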
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef571a7e891a1eb9aa9c8d0c3590e57c27aed2b369226612798dfa8e9fff15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.524563 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09e7b999-d47e-409d-bafe-ffde8c03995e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b10d453f884ce1095dd9bb0f8d91ebd466c4e2771a3a425abbe3001084cb09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1bc82fcd0af4a3c1e3508fe525c95017c03f521388a28e7eaad2bb5dbb0d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8797e4a71975106b9aa6c6519ef0d64047df3b21e3164f586b1441f5897e0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e8c9b4927e7df11ad15f3436346ceef587ef7e5e8716d26bd2e9b2000f9bc16\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c9b4927e7df11ad15f3436346ceef587ef7e5e8716d26bd2e9b2000f9bc16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.545575 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.563913 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.570672 4721 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.570723 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.570735 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.570749 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.570760 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:20Z","lastTransitionTime":"2026-02-02T13:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.588678 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ef87877d327f3682656924644afe911470b558f
e3fd45dd708e4d6f0aa69f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a1a04dffbd006aaa015d90a67b9f605aea78ad5e0c0dd17cff95375c2bc73f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:01:52Z\\\",\\\"message\\\":\\\"map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 13:01:52.319818 6399 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-48kgl\\\\nI0202 13:01:52.319835 6399 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 13:01:52.319869 6399 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error 
occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.605350 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.623750 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.640849 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.654096 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.668313 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecfc8a9e-993f-494a-ba91-4132345cee05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e9bc6a3dd0746c09c22e858873a85ac0fd67fe903dfa6f2cd5061020ce5230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c7004e0fbbf58f24a8539f9e323fdba3cbe648aff6b6c7be443d765260a148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sz7tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 
13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.673026 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.673089 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.673106 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.673125 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.673138 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:20Z","lastTransitionTime":"2026-02-02T13:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.691337 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.712617 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.725935 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.740286 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqz79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfab3ffb-8798-423d-9b55-83868b76a14e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqz79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.776256 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.776316 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.776329 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.776345 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.776355 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:20Z","lastTransitionTime":"2026-02-02T13:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.879180 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.879570 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.879587 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.879607 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.879620 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:20Z","lastTransitionTime":"2026-02-02T13:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.925908 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwcs2_b15bc48d-f88d-4b38-a9e1-00bb00b88a52/ovnkube-controller/3.log" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.926666 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwcs2_b15bc48d-f88d-4b38-a9e1-00bb00b88a52/ovnkube-controller/2.log" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.930544 4721 generic.go:334] "Generic (PLEG): container finished" podID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerID="0ef87877d327f3682656924644afe911470b558fe3fd45dd708e4d6f0aa69f29" exitCode=1 Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.930604 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" event={"ID":"b15bc48d-f88d-4b38-a9e1-00bb00b88a52","Type":"ContainerDied","Data":"0ef87877d327f3682656924644afe911470b558fe3fd45dd708e4d6f0aa69f29"} Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.930670 4721 scope.go:117] "RemoveContainer" containerID="6a1a04dffbd006aaa015d90a67b9f605aea78ad5e0c0dd17cff95375c2bc73f1" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.931836 4721 scope.go:117] "RemoveContainer" containerID="0ef87877d327f3682656924644afe911470b558fe3fd45dd708e4d6f0aa69f29" Feb 02 13:02:20 crc kubenswrapper[4721]: E0202 13:02:20.932174 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pwcs2_openshift-ovn-kubernetes(b15bc48d-f88d-4b38-a9e1-00bb00b88a52)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.948383 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.959350 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.972084 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqz79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfab3ffb-8798-423d-9b55-83868b76a14e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqz79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.982405 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.982446 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.982459 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.982477 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.982491 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:20Z","lastTransitionTime":"2026-02-02T13:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:20 crc kubenswrapper[4721]: I0202 13:02:20.991729 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.010428 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.025923 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.041015 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.056637 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e01f486c1be69ceb8e869e6feefbef4e07fd230f5cb41ec3adfaaba36430569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:02:13Z\\\",\\\"message\\\":\\\"2026-02-02T13:01:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7c44ae0c-2ffe-42d2-9e4e-a4f37c32136f\\\\n2026-02-02T13:01:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7c44ae0c-2ffe-42d2-9e4e-a4f37c32136f to /host/opt/cni/bin/\\\\n2026-02-02T13:01:28Z [verbose] multus-daemon started\\\\n2026-02-02T13:01:28Z [verbose] Readiness Indicator file check\\\\n2026-02-02T13:02:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.072037 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef571a7e891a1eb9aa9c8d0c3590e57c27aed2b369226612798dfa8e9fff15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.085133 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.085183 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:21 crc 
kubenswrapper[4721]: I0202 13:02:21.085196 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.085214 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.085227 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:21Z","lastTransitionTime":"2026-02-02T13:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.086720 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.100167 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.131442 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ef87877d327f3682656924644afe911470b558fe3fd45dd708e4d6f0aa69f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a1a04dffbd006aaa015d90a67b9f605aea78ad5e0c0dd17cff95375c2bc73f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:01:52Z\\\",\\\"message\\\":\\\"map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 13:01:52.319818 6399 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-48kgl\\\\nI0202 13:01:52.319835 6399 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 13:01:52.319869 6399 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error 
occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ef87877d327f3682656924644afe911470b558fe3fd45dd708e4d6f0aa69f29\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:02:20Z\\\",\\\"message\\\":\\\"former during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z]\\\\nI0202 13:02:20.256797 6792 services_controller.go:443] Built service openshift-marketplace/marketplace-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.53\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8383, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.53\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8081, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, 
hasNodeP\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef
0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.144666 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09e7b999-d47e-409d-bafe-ffde8c03995e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b10d453f884ce1095dd9bb0f8d91ebd466c4e2771a3a425abbe3001084cb09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1bc82fcd0af4a3c1e3508fe525c95017c03f521388a28e7eaad2bb5dbb0d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35
a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8797e4a71975106b9aa6c6519ef0d64047df3b21e3164f586b1441f5897e0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e8c9b4927e7df11ad15f3436346ceef587ef7e5e8716d26bd2e9b2000f9bc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c9b4927e7df11ad15f3436346ceef587ef7e5e8716d26bd2e9b2000f9bc16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.160757 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.172335 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.184144 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.187468 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.187541 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.187566 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.188030 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.188370 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:21Z","lastTransitionTime":"2026-02-02T13:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.195207 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecfc8a9e-993f-494a-ba91-4132345cee05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e9bc6a3dd0746c09c22e858873a85ac0fd67fe903dfa6f2cd5061020ce5230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c7004e0fbbf58f24a8539f9e323fdba3cbe648aff6b6c7be443d765260a148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:39Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sz7tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.208221 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.292393 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.292462 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.292481 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.292960 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.293014 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:21Z","lastTransitionTime":"2026-02-02T13:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.395468 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.395523 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.395540 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.395564 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.395581 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:21Z","lastTransitionTime":"2026-02-02T13:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.409026 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.409127 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.409115 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 09:03:31.163576141 +0000 UTC Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.409032 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:21 crc kubenswrapper[4721]: E0202 13:02:21.409262 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:02:21 crc kubenswrapper[4721]: E0202 13:02:21.409352 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:21 crc kubenswrapper[4721]: E0202 13:02:21.409437 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.498497 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.498553 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.498572 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.498601 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.498637 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:21Z","lastTransitionTime":"2026-02-02T13:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.601321 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.601417 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.601457 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.601488 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.601512 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:21Z","lastTransitionTime":"2026-02-02T13:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.704776 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.704828 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.704839 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.704856 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.704871 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:21Z","lastTransitionTime":"2026-02-02T13:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.808227 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.808279 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.808295 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.808318 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.808334 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:21Z","lastTransitionTime":"2026-02-02T13:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.911513 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.911554 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.911564 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.911580 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.911593 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:21Z","lastTransitionTime":"2026-02-02T13:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.936276 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwcs2_b15bc48d-f88d-4b38-a9e1-00bb00b88a52/ovnkube-controller/3.log" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.940254 4721 scope.go:117] "RemoveContainer" containerID="0ef87877d327f3682656924644afe911470b558fe3fd45dd708e4d6f0aa69f29" Feb 02 13:02:21 crc kubenswrapper[4721]: E0202 13:02:21.940384 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pwcs2_openshift-ovn-kubernetes(b15bc48d-f88d-4b38-a9e1-00bb00b88a52)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.960340 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adb26aa4f183a30ed2171ec82ffe1dd18441edca9ef1c82be5de119b9f5dfeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea451bfae69b1e106c048c055efa12ca4703ed50c5a896c96249667bc894be68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.976651 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186eb66a27288d345e91e727cc75eedd075c4b2c53f34b84f11c781a35013221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:21 crc kubenswrapper[4721]: I0202 13:02:21.991677 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-48kgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b4ea41-ceb5-481a-899e-c2876ced6d49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf615344c715b3b5e29df415e722aa0a79e0f5920a112b9dc5fb2e59edb8e02a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p6j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-48kgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.005620 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecfc8a9e-993f-494a-ba91-4132345cee05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e9bc6a3dd0746c09c22e858873a85ac0fd67fe903dfa6f2cd5061020ce5230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c7004e0fbbf58f24a8539f9e323fdba3cbe648aff6b6c7be443d765260a148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js5c8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sz7tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:22Z is after 2025-08-24T17:21:41Z" Feb 02 
13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.014438 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.014465 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.014473 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.014486 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.014496 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:22Z","lastTransitionTime":"2026-02-02T13:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.018424 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:22Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.031317 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:22Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.042311 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sgp8m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13f3ec54-a7fb-4236-9583-827d960b2086\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79b84f660c3d88abf5333666a0c58660017ec7a5a73236fdbec2a66d3581e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n42qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sgp8m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T13:02:22Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.054141 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqz79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfab3ffb-8798-423d-9b55-83868b76a14e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbwws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqz79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:22Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.075443 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ecae132-72a0-444f-a8a0-6b66ac020b8a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f90327e648588b02fa6cbf66e4669d08eca0fec9e983eba6b51b1b6255a974fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6636c906b1c4974addbe6737ada10dc7b1e7e1d26030bc353b26e6ea9cbf79c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ad0c06550cf5333819bbc66fe88e308352e5b3731fb7589908cf0513a1779e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a0afebb867064e3383aa667730cb4ee786878
a48cea87c6537dc4da6e782aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1f0c247d2735f3222f768de49245a1b86e12d508d1ac4f470cef381ababf90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6c33bc8b83864e16f12e0fdaccef704118b6bbc2dfec4dc4c6c14130988ef4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c04d33bb133ee7e08e972bd10e814b5407f058a499d339d9b022d64712d02b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8c39f3a470d594d6a1c9bcd759c4fee9704d429712dca6f11c37e40c9863f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:22Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.093513 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0939893-cc01-45bf-844d-77d599d4d0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64df3044205a42c6ecd41f11a393c6bf
1c2db8e9f91a451143d99a8ffb442f06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 13:01:03.744523 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 13:01:03.746892 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-647403325/tls.crt::/tmp/serving-cert-647403325/tls.key\\\\\\\"\\\\nI0202 13:01:19.092031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 13:01:19.097108 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 13:01:19.097147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 13:01:19.097179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 13:01:19.097190 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 13:01:19.110409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 13:01:19.110441 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110446 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 13:01:19.110451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 13:01:19.110454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 13:01:19.110464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 13:01:19.110471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 13:01:19.110474 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 13:01:19.113447 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:22Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.110038 4721 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22919e36d401d7d15a12233a844fc73fadb81e6729ee5bf13075363ed0fb6443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:22Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.116572 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.116605 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.116616 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.116631 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.116643 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:22Z","lastTransitionTime":"2026-02-02T13:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.124587 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:22Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.142052 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ltw7d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e01f486c1be69ceb8e869e6feefbef4e07fd230f5cb41ec3adfaaba36430569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:02:13Z\\\",\\\"message\\\":\\\"2026-02-02T13:01:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7c44ae0c-2ffe-42d2-9e4e-a4f37c32136f\\\\n2026-02-02T13:01:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7c44ae0c-2ffe-42d2-9e4e-a4f37c32136f to /host/opt/cni/bin/\\\\n2026-02-02T13:01:28Z [verbose] multus-daemon started\\\\n2026-02-02T13:01:28Z [verbose] Readiness Indicator file check\\\\n2026-02-02T13:02:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dchqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ltw7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:22Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.160434 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b1629d-0184-4975-8d4b-7a32913e7389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ef571a7e891a1eb9aa9c8d0c3590e57c27aed2b369226612798dfa8e9fff15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://300219467a5463b61c541d43a8a1e0a52832275c0ea9f25365ae171e7a9917d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be4dc7e5fc10c66010fefe026d707139d3f367e23fe6e3802af466e9be1cadb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5db5fa2480ff66ab3f6dedf8ff0657f749ae860659c7efb40c49b03e39fe8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beb6ee1be2b65f10751a3a75dae3c008fbb7a1a2a686c16b579f1abcce745c89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2716ccd45d9db0c7f951189bed607322fc0a59641212923e2b71ffac2ba17868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d102ac76b81834338d5502d360272956cd2364b424a32a6d90a5e9bfedeeb2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2tcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4lhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:22Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.180272 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9428a242-fbf8-46a8-8527-4f85b5dc560b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c27f73cad17b65a844c92ff27f1bb25dc91ee023461161b0418a23912bd186c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374718be2e182a95c339dcbf987fdaa9a52ffc00f7c892bab39c27807b2e825a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f34214c3fbc902846b91b8e887ee4332538456bb0caf567fbb85636791af1b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:22Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.195874 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45de494a1ced27830ad6e4acb334966f48c89aab01696558e481b174c6362e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://142fea12fc056e370e
6c152f1358ffca7f5345b290b58801d6ec8ca042262591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6j6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rppjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:22Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.218879 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.218920 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.218933 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.218950 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.218963 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:22Z","lastTransitionTime":"2026-02-02T13:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.230007 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ef87877d327f3682656924644afe911470b558fe3fd45dd708e4d6f0aa69f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ef87877d327f3682656924644afe911470b558fe3fd45dd708e4d6f0aa69f29\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T13:02:20Z\\\",\\\"message\\\":\\\"former during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:20Z is after 2025-08-24T17:21:41Z]\\\\nI0202 13:02:20.256797 6792 services_controller.go:443] Built service openshift-marketplace/marketplace-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.53\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8383, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.53\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8081, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodeP\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T13:02:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=ovnkube-controller pod=ovnkube-node-pwcs2_openshift-ovn-kubernetes(b15bc48d-f88d-4b38-a9e1-00bb00b88a52)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88f6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwcs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:22Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.244234 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09e7b999-d47e-409d-bafe-ffde8c03995e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T13:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b10d453f884ce1095dd9bb0f8d91ebd466c4e2771a3a425abbe3001084cb09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1bc82fcd0af4a3c1e3508fe525c95017c03f521388a28e7eaad2bb5dbb0d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8797e4a71975106b9aa6c6519ef0d64047df3b21e3164f586b1441f5897e0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T13:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e8c9b4927e7df11ad15f3436346ceef587ef7e5e8716d26bd2e9b2000f9bc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c9b4927e7df11ad15f3436346ceef587ef7e5e8716d26bd2e9b2000f9bc16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T13:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T13:01:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T13:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T13:02:22Z is after 2025-08-24T17:21:41Z" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.321445 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.321495 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.321506 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.321523 4721 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.321535 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:22Z","lastTransitionTime":"2026-02-02T13:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.409430 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 08:56:51.399459759 +0000 UTC Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.409646 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:22 crc kubenswrapper[4721]: E0202 13:02:22.409843 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.423633 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.423684 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.423701 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.423723 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.423741 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:22Z","lastTransitionTime":"2026-02-02T13:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.526905 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.526953 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.526968 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.526989 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.527005 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:22Z","lastTransitionTime":"2026-02-02T13:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.630459 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.630537 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.630561 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.630592 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.630612 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:22Z","lastTransitionTime":"2026-02-02T13:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.733843 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.733904 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.733922 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.733946 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.733961 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:22Z","lastTransitionTime":"2026-02-02T13:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.836841 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.836884 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.836895 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.836913 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.836926 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:22Z","lastTransitionTime":"2026-02-02T13:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.940842 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.940900 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.940923 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.940952 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:22 crc kubenswrapper[4721]: I0202 13:02:22.940975 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:22Z","lastTransitionTime":"2026-02-02T13:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.044426 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.044491 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.044508 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.044533 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.044553 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:23Z","lastTransitionTime":"2026-02-02T13:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.052251 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:02:23 crc kubenswrapper[4721]: E0202 13:02:23.052409 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:27.052378078 +0000 UTC m=+147.354892507 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.147749 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.147843 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.147865 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.147896 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.147918 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:23Z","lastTransitionTime":"2026-02-02T13:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.154559 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:23 crc kubenswrapper[4721]: E0202 13:02:23.154769 4721 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 13:02:23 crc kubenswrapper[4721]: E0202 13:02:23.154920 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 13:03:27.154884669 +0000 UTC m=+147.457399058 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.155005 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.155134 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.155190 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:23 crc kubenswrapper[4721]: E0202 13:02:23.155331 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:02:23 crc kubenswrapper[4721]: E0202 13:02:23.155399 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 13:02:23 crc kubenswrapper[4721]: E0202 13:02:23.155451 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:02:23 crc kubenswrapper[4721]: E0202 13:02:23.155475 4721 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:02:23 crc kubenswrapper[4721]: E0202 13:02:23.155530 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 13:03:27.155512629 +0000 UTC m=+147.458027038 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:02:23 crc kubenswrapper[4721]: E0202 13:02:23.155404 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 13:02:23 crc kubenswrapper[4721]: E0202 13:02:23.155613 4721 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:02:23 crc kubenswrapper[4721]: E0202 13:02:23.155651 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 13:03:27.155642442 +0000 UTC m=+147.458156831 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 13:02:23 crc kubenswrapper[4721]: E0202 13:02:23.155347 4721 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 13:02:23 crc kubenswrapper[4721]: E0202 13:02:23.155690 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 13:03:27.155681714 +0000 UTC m=+147.458196103 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.251307 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.251375 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.251392 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.251418 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.251436 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:23Z","lastTransitionTime":"2026-02-02T13:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.354019 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.354139 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.354178 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.354208 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.354229 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:23Z","lastTransitionTime":"2026-02-02T13:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.409293 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.409361 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:02:23 crc kubenswrapper[4721]: E0202 13:02:23.409391 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.409462 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:23 crc kubenswrapper[4721]: E0202 13:02:23.409507 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:02:23 crc kubenswrapper[4721]: E0202 13:02:23.409590 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.409613 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 10:45:22.241663052 +0000 UTC Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.456771 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.456843 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.456863 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.456890 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.457606 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:23Z","lastTransitionTime":"2026-02-02T13:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.561219 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.561266 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.561286 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.561307 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.561325 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:23Z","lastTransitionTime":"2026-02-02T13:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.664540 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.664574 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.664582 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.664595 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.664605 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:23Z","lastTransitionTime":"2026-02-02T13:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.767212 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.767262 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.767273 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.767290 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.767301 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:23Z","lastTransitionTime":"2026-02-02T13:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.869604 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.869676 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.869688 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.869706 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.869720 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:23Z","lastTransitionTime":"2026-02-02T13:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.973509 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.973546 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.973556 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.973572 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:23 crc kubenswrapper[4721]: I0202 13:02:23.973583 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:23Z","lastTransitionTime":"2026-02-02T13:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.076019 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.076057 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.076086 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.076101 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.076115 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:24Z","lastTransitionTime":"2026-02-02T13:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.179175 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.179221 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.179235 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.179253 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.179264 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:24Z","lastTransitionTime":"2026-02-02T13:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.282445 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.282512 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.282535 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.282566 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.282588 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:24Z","lastTransitionTime":"2026-02-02T13:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.385445 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.385521 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.385546 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.385576 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.385599 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:24Z","lastTransitionTime":"2026-02-02T13:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.409283 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:24 crc kubenswrapper[4721]: E0202 13:02:24.409481 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.409755 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 01:09:51.679636108 +0000 UTC Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.487600 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.487647 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.487656 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.487672 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.487681 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:24Z","lastTransitionTime":"2026-02-02T13:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.590637 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.590703 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.590720 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.590745 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.590761 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:24Z","lastTransitionTime":"2026-02-02T13:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.693933 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.694007 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.694029 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.694056 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.694117 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:24Z","lastTransitionTime":"2026-02-02T13:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.797366 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.797428 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.797445 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.797468 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.797486 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:24Z","lastTransitionTime":"2026-02-02T13:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.900665 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.900713 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.900729 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.900750 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:24 crc kubenswrapper[4721]: I0202 13:02:24.900767 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:24Z","lastTransitionTime":"2026-02-02T13:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.003584 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.003663 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.003688 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.003718 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.003740 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:25Z","lastTransitionTime":"2026-02-02T13:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.106506 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.106583 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.106609 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.106744 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.106765 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:25Z","lastTransitionTime":"2026-02-02T13:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.210677 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.210738 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.210755 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.210780 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.210798 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:25Z","lastTransitionTime":"2026-02-02T13:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.313578 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.313649 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.313671 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.313701 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.313724 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:25Z","lastTransitionTime":"2026-02-02T13:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.409583 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:25 crc kubenswrapper[4721]: E0202 13:02:25.409767 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.409847 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.409913 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:25 crc kubenswrapper[4721]: E0202 13:02:25.409988 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.409982 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 01:06:21.215265486 +0000 UTC Feb 02 13:02:25 crc kubenswrapper[4721]: E0202 13:02:25.410219 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.416803 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.416865 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.416889 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.416918 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.416945 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:25Z","lastTransitionTime":"2026-02-02T13:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.519814 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.519896 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.519936 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.519970 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.519992 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:25Z","lastTransitionTime":"2026-02-02T13:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.623221 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.623291 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.623313 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.623337 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.623355 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:25Z","lastTransitionTime":"2026-02-02T13:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.726454 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.726531 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.726552 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.726576 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.726594 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:25Z","lastTransitionTime":"2026-02-02T13:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.830551 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.830632 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.830652 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.830678 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.830695 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:25Z","lastTransitionTime":"2026-02-02T13:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.933725 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.933773 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.933792 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.933814 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:25 crc kubenswrapper[4721]: I0202 13:02:25.933831 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:25Z","lastTransitionTime":"2026-02-02T13:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.036945 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.037000 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.037018 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.037042 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.037064 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:26Z","lastTransitionTime":"2026-02-02T13:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.139903 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.139962 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.139980 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.140002 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.140021 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:26Z","lastTransitionTime":"2026-02-02T13:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.243488 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.243561 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.243575 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.243591 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.243625 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:26Z","lastTransitionTime":"2026-02-02T13:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.347719 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.347870 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.347908 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.347940 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.347965 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:26Z","lastTransitionTime":"2026-02-02T13:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.408904 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:26 crc kubenswrapper[4721]: E0202 13:02:26.409112 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
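
Note: every "Node became not ready" heartbeat above repeats the same root cause: the container runtime reports NetworkReady=false because nothing has written a CNI network config yet. The check behind the message is simple; below is a minimal Go sketch of it, assuming only that the runtime looks for *.conf, *.conflist or *.json files in the directory named in the log (the real check lives in libcni and is reached via CRI, not via this code).

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d" // directory named in the log
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("cannot read CNI conf dir:", err)
            return
        }
        found := false
        for _, e := range entries {
            // libcni treats these extensions as candidate network configs
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("CNI config present:", e.Name())
                found = true
            }
        }
        if !found {
            fmt.Println("no CNI configuration file in", dir)
        }
    }

On an OVN-Kubernetes cluster such as this one, that file is normally written by the ovnkube-node pod once its controller is running, so the condition clears on its own when ovnkube-node comes up; hand-writing a config is not the fix here.
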
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.411154 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 15:30:01.517845089 +0000 UTC Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.451400 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.451438 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.451448 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.451464 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.451478 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:26Z","lastTransitionTime":"2026-02-02T13:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.555180 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.555233 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.555244 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.555261 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.555274 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:26Z","lastTransitionTime":"2026-02-02T13:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.659022 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.659143 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.659164 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.659189 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.659208 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:26Z","lastTransitionTime":"2026-02-02T13:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.762456 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.762527 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.762549 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.762579 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.762602 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:26Z","lastTransitionTime":"2026-02-02T13:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.865400 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.865463 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.865479 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.865501 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.865518 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:26Z","lastTransitionTime":"2026-02-02T13:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.968253 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.968292 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.968301 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.968313 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:26 crc kubenswrapper[4721]: I0202 13:02:26.968322 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:26Z","lastTransitionTime":"2026-02-02T13:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.071153 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.071226 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.071252 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.071287 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.071309 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:27Z","lastTransitionTime":"2026-02-02T13:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.177646 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.177719 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.177737 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.177763 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.177781 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:27Z","lastTransitionTime":"2026-02-02T13:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.230331 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.230405 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.230424 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.230451 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.230469 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T13:02:27Z","lastTransitionTime":"2026-02-02T13:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.314433 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2bpt"] Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.314984 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2bpt" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.320844 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.320966 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.321400 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.321777 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.401061 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7a5f382b-2706-4d62-99b4-04ee8284bae5-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-g2bpt\" (UID: \"7a5f382b-2706-4d62-99b4-04ee8284bae5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2bpt" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.401122 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a5f382b-2706-4d62-99b4-04ee8284bae5-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-g2bpt\" (UID: \"7a5f382b-2706-4d62-99b4-04ee8284bae5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2bpt" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.401220 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/7a5f382b-2706-4d62-99b4-04ee8284bae5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-g2bpt\" (UID: \"7a5f382b-2706-4d62-99b4-04ee8284bae5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2bpt" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.401278 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a5f382b-2706-4d62-99b4-04ee8284bae5-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-g2bpt\" (UID: \"7a5f382b-2706-4d62-99b4-04ee8284bae5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2bpt" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.401357 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7a5f382b-2706-4d62-99b4-04ee8284bae5-service-ca\") pod \"cluster-version-operator-5c965bbfc6-g2bpt\" (UID: \"7a5f382b-2706-4d62-99b4-04ee8284bae5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2bpt" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.402127 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=68.402113274 podStartE2EDuration="1m8.402113274s" podCreationTimestamp="2026-02-02 13:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:02:27.386612134 +0000 UTC m=+87.689126533" watchObservedRunningTime="2026-02-02 13:02:27.402113274 +0000 UTC m=+87.704627663" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.409355 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:27 crc kubenswrapper[4721]: E0202 13:02:27.409464 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.409663 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:02:27 crc kubenswrapper[4721]: E0202 13:02:27.409748 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.409885 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:27 crc kubenswrapper[4721]: E0202 13:02:27.409947 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.411342 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 12:18:21.502504074 +0000 UTC Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.411396 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.418309 4721 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.428991 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-ltw7d" podStartSLOduration=62.42896776 podStartE2EDuration="1m2.42896776s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:02:27.428928589 +0000 UTC m=+87.731442988" watchObservedRunningTime="2026-02-02 13:02:27.42896776 +0000 UTC m=+87.731482159" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.446700 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-x4lhg" podStartSLOduration=62.446679266 podStartE2EDuration="1m2.446679266s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:02:27.446540602 +0000 UTC m=+87.749055001" watchObservedRunningTime="2026-02-02 13:02:27.446679266 +0000 UTC m=+87.749193655" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.459685 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=34.459660141 podStartE2EDuration="34.459660141s" podCreationTimestamp="2026-02-02 13:01:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:02:27.459232789 +0000 UTC m=+87.761747188" watchObservedRunningTime="2026-02-02 13:02:27.459660141 +0000 UTC m=+87.762174550" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.473836 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=61.473820331 podStartE2EDuration="1m1.473820331s" podCreationTimestamp="2026-02-02 13:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:02:27.473047879 +0000 UTC m=+87.775562288" watchObservedRunningTime="2026-02-02 13:02:27.473820331 +0000 UTC m=+87.776334730"
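
Note: the certificate_manager.go lines above explain themselves once the jitter rule is known. client-go draws a fresh rotation deadline in roughly the 70-90% window of the certificate's lifetime every time it evaluates it (hence the different deadlines at 13:02:25.409982, 13:02:26.411154 and 13:02:27.411342); once a drawn deadline is in the past it logs "Rotating certificates" and files a CSR, which is why the *v1.CertificateSigningRequest watch cache is populated immediately afterwards. A sketch of that computation, assuming the usual 70-90% window and a notBefore of 2025-02-24 (the log only prints the expiration):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func main() {
        // notBefore is an assumption (one-year cert); the expiration is from the log.
        notBefore := time.Date(2025, time.February, 24, 5, 53, 3, 0, time.UTC)
        notAfter := time.Date(2026, time.February, 24, 5, 53, 3, 0, time.UTC)
        total := notAfter.Sub(notBefore)
        // a fresh deadline in the 70-90% window, re-drawn on every evaluation
        deadline := notBefore.Add(time.Duration(float64(total) * (0.7 + 0.3*rand.Float64())))
        fmt.Println("rotation deadline:", deadline)
    }

All three logged deadlines (2025-11-13, 2025-11-25, 2025-12-09) fall inside that window for a one-year certificate, and all are earlier than the log's own clock (2026-02-02), so immediate rotation is the expected behavior.
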
Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.521637 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7a5f382b-2706-4d62-99b4-04ee8284bae5-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-g2bpt\" (UID: \"7a5f382b-2706-4d62-99b4-04ee8284bae5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2bpt" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.521684 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a5f382b-2706-4d62-99b4-04ee8284bae5-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-g2bpt\" (UID: \"7a5f382b-2706-4d62-99b4-04ee8284bae5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2bpt" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.521713 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7a5f382b-2706-4d62-99b4-04ee8284bae5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-g2bpt\" (UID: \"7a5f382b-2706-4d62-99b4-04ee8284bae5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2bpt" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.521737 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a5f382b-2706-4d62-99b4-04ee8284bae5-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-g2bpt\" (UID: \"7a5f382b-2706-4d62-99b4-04ee8284bae5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2bpt" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.521759 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7a5f382b-2706-4d62-99b4-04ee8284bae5-service-ca\") pod \"cluster-version-operator-5c965bbfc6-g2bpt\" (UID: \"7a5f382b-2706-4d62-99b4-04ee8284bae5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2bpt" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.521752 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7a5f382b-2706-4d62-99b4-04ee8284bae5-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-g2bpt\" (UID: \"7a5f382b-2706-4d62-99b4-04ee8284bae5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2bpt" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.521809 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podStartSLOduration=62.521792464 podStartE2EDuration="1m2.521792464s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:02:27.487879358 +0000 UTC m=+87.790393767" watchObservedRunningTime="2026-02-02 13:02:27.521792464 +0000 UTC m=+87.824306853" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.521885 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7a5f382b-2706-4d62-99b4-04ee8284bae5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-g2bpt\" (UID: \"7a5f382b-2706-4d62-99b4-04ee8284bae5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2bpt"
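
Note: the reconciler pairs above follow the kubelet's two-step volume flow for the newly added cluster-version-operator pod: reconciler_common.go first confirms each volume is attached (VerifyControllerAttachedVolume), then operation_generator.go mounts it (MountVolume.SetUp), and the UniqueName prefixes give the volume types. A sketch of the pod's volume section reconstructed from those entries; the host paths and the configmap name are assumptions, while the volume names, types and the secret name are in the log:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        hostPath := func(p string) corev1.VolumeSource {
            return corev1.VolumeSource{HostPath: &corev1.HostPathVolumeSource{Path: p}}
        }
        volumes := []corev1.Volume{
            // kubernetes.io/host-path (paths assumed from the volume names)
            {Name: "etc-ssl-certs", VolumeSource: hostPath("/etc/ssl/certs")},
            {Name: "etc-cvo-updatepayloads", VolumeSource: hostPath("/etc/cvo/updatepayloads")},
            // kubernetes.io/secret (secret name from the cache line at 13:02:27.321777)
            {Name: "serving-cert", VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{SecretName: "cluster-version-operator-serving-cert"}}},
            // kubernetes.io/configmap (configmap name assumed from the volume name)
            {Name: "service-ca", VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "service-ca"}}}},
            // kubernetes.io/projected (the service-account token volume)
            {Name: "kube-api-access", VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{}}},
        }
        for _, v := range volumes {
            fmt.Println(v.Name)
        }
    }
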
Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.522646 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7a5f382b-2706-4d62-99b4-04ee8284bae5-service-ca\") pod \"cluster-version-operator-5c965bbfc6-g2bpt\" (UID: \"7a5f382b-2706-4d62-99b4-04ee8284bae5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2bpt" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.533621 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a5f382b-2706-4d62-99b4-04ee8284bae5-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-g2bpt\" (UID: \"7a5f382b-2706-4d62-99b4-04ee8284bae5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2bpt" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.546130 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a5f382b-2706-4d62-99b4-04ee8284bae5-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-g2bpt\" (UID: \"7a5f382b-2706-4d62-99b4-04ee8284bae5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2bpt" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.582452 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-48kgl" podStartSLOduration=62.582432934 podStartE2EDuration="1m2.582432934s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:02:27.582394963 +0000 UTC m=+87.884909362" watchObservedRunningTime="2026-02-02 13:02:27.582432934 +0000 UTC m=+87.884947323" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.596895 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sz7tc" podStartSLOduration=61.596878553 podStartE2EDuration="1m1.596878553s" podCreationTimestamp="2026-02-02 13:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:02:27.596361418 +0000 UTC m=+87.898875817" watchObservedRunningTime="2026-02-02 13:02:27.596878553 +0000 UTC m=+87.899392942" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.639535 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=64.639510667 podStartE2EDuration="1m4.639510667s" podCreationTimestamp="2026-02-02 13:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:02:27.637525639 +0000 UTC m=+87.940040028" watchObservedRunningTime="2026-02-02 13:02:27.639510667 +0000 UTC m=+87.942025076" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.644021 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2bpt" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.679808 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-sgp8m" podStartSLOduration=63.679787632 podStartE2EDuration="1m3.679787632s" podCreationTimestamp="2026-02-02 13:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:02:27.679244987 +0000 UTC m=+87.981759366" watchObservedRunningTime="2026-02-02 13:02:27.679787632 +0000 UTC m=+87.982302021" Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.965218 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2bpt" event={"ID":"7a5f382b-2706-4d62-99b4-04ee8284bae5","Type":"ContainerStarted","Data":"223e6da1547777d9020eeb86f08529f9ca9315c783ee8f377db090fd9a040e38"} Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.965680 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2bpt" event={"ID":"7a5f382b-2706-4d62-99b4-04ee8284bae5","Type":"ContainerStarted","Data":"21abe34615dab7183cfd84b4bccfa2d815de7b5e3fecd873a79ac6ca0ab37845"} Feb 02 13:02:27 crc kubenswrapper[4721]: I0202 13:02:27.990338 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2bpt" podStartSLOduration=62.990315557 podStartE2EDuration="1m2.990315557s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:02:27.988638907 +0000 UTC m=+88.291153326" watchObservedRunningTime="2026-02-02 13:02:27.990315557 +0000 UTC m=+88.292829976" Feb 02 13:02:28 crc kubenswrapper[4721]: I0202 13:02:28.408683 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:28 crc kubenswrapper[4721]: E0202 13:02:28.408800 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:29 crc kubenswrapper[4721]: I0202 13:02:29.408683 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:29 crc kubenswrapper[4721]: I0202 13:02:29.408778 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
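
Note: podStartSLOduration in the pod_startup_latency_tracker lines is observedRunningTime minus podCreationTimestamp with the image-pull window subtracted; here every firstStartedPulling/lastFinishedPulling is the zero time, so podStartSLOduration equals podStartE2EDuration in each entry. A worked check against the cluster-version-operator entry above:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // both timestamps are from the 13:02:27.990338 log entry
        created, _ := time.Parse(time.RFC3339, "2026-02-02T13:01:25Z")
        observed, _ := time.Parse(time.RFC3339Nano, "2026-02-02T13:02:27.990315557Z")
        fmt.Println(observed.Sub(created)) // 1m2.990315557s = podStartSLOduration 62.990315557
    }
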
Feb 02 13:02:29 crc kubenswrapper[4721]: I0202 13:02:29.408683 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:02:29 crc kubenswrapper[4721]: E0202 13:02:29.408819 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:29 crc kubenswrapper[4721]: E0202 13:02:29.408922 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:29 crc kubenswrapper[4721]: E0202 13:02:29.409025 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:02:30 crc kubenswrapper[4721]: I0202 13:02:30.408713 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:30 crc kubenswrapper[4721]: E0202 13:02:30.409812 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:31 crc kubenswrapper[4721]: I0202 13:02:31.408549 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:02:31 crc kubenswrapper[4721]: I0202 13:02:31.408636 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:31 crc kubenswrapper[4721]: E0202 13:02:31.408715 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:02:31 crc kubenswrapper[4721]: I0202 13:02:31.408740 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:31 crc kubenswrapper[4721]: E0202 13:02:31.408841 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:31 crc kubenswrapper[4721]: E0202 13:02:31.408994 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:32 crc kubenswrapper[4721]: I0202 13:02:32.409247 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:32 crc kubenswrapper[4721]: E0202 13:02:32.409588 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:32 crc kubenswrapper[4721]: I0202 13:02:32.421302 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 02 13:02:33 crc kubenswrapper[4721]: I0202 13:02:33.408673 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:02:33 crc kubenswrapper[4721]: I0202 13:02:33.408711 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:33 crc kubenswrapper[4721]: E0202 13:02:33.408836 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:02:33 crc kubenswrapper[4721]: I0202 13:02:33.408926 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:33 crc kubenswrapper[4721]: E0202 13:02:33.409156 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:33 crc kubenswrapper[4721]: E0202 13:02:33.409232 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:34 crc kubenswrapper[4721]: I0202 13:02:34.409702 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:34 crc kubenswrapper[4721]: E0202 13:02:34.409944 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:35 crc kubenswrapper[4721]: I0202 13:02:35.409365 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:02:35 crc kubenswrapper[4721]: I0202 13:02:35.409447 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:35 crc kubenswrapper[4721]: I0202 13:02:35.409383 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:35 crc kubenswrapper[4721]: E0202 13:02:35.409536 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:02:35 crc kubenswrapper[4721]: E0202 13:02:35.409691 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:35 crc kubenswrapper[4721]: E0202 13:02:35.409813 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:36 crc kubenswrapper[4721]: I0202 13:02:36.408785 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:36 crc kubenswrapper[4721]: E0202 13:02:36.409321 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:37 crc kubenswrapper[4721]: I0202 13:02:37.409410 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:02:37 crc kubenswrapper[4721]: I0202 13:02:37.409448 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:37 crc kubenswrapper[4721]: E0202 13:02:37.409620 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:02:37 crc kubenswrapper[4721]: E0202 13:02:37.409738 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:37 crc kubenswrapper[4721]: I0202 13:02:37.410325 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:37 crc kubenswrapper[4721]: E0202 13:02:37.410480 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:37 crc kubenswrapper[4721]: I0202 13:02:37.410556 4721 scope.go:117] "RemoveContainer" containerID="0ef87877d327f3682656924644afe911470b558fe3fd45dd708e4d6f0aa69f29" Feb 02 13:02:37 crc kubenswrapper[4721]: E0202 13:02:37.410729 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pwcs2_openshift-ovn-kubernetes(b15bc48d-f88d-4b38-a9e1-00bb00b88a52)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52"
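
Note: the CrashLoopBackOff entry above is the likely root cause of everything else in this capture: on OVN-Kubernetes the CNI config in /etc/kubernetes/cni/net.d/ only appears once the ovnkube-node pod's controller is running, so while ovnkube-controller crash-loops the node stays NotReady and no regular pod sandboxes can be created. The kubelet's restart back-off doubles from a 10s base up to a 5m cap (those values are the usual kubelet defaults, assumed here, not printed in the log), which would make "back-off 40s" the third failed restart:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        backoff := 10 * time.Second // kubelet's default initial crash-loop back-off (assumed)
        for restart := 1; restart <= 6; restart++ {
            fmt.Printf("restart %d: back-off %v\n", restart, backoff)
            backoff *= 2
            if backoff > 5*time.Minute {
                backoff = 5 * time.Minute // default cap (assumed)
            }
        }
    }
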
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:38 crc kubenswrapper[4721]: E0202 13:02:38.409669 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:39 crc kubenswrapper[4721]: I0202 13:02:39.409252 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:02:39 crc kubenswrapper[4721]: I0202 13:02:39.409293 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:39 crc kubenswrapper[4721]: I0202 13:02:39.409257 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:39 crc kubenswrapper[4721]: E0202 13:02:39.409419 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:02:39 crc kubenswrapper[4721]: E0202 13:02:39.409548 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:39 crc kubenswrapper[4721]: E0202 13:02:39.409808 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:40 crc kubenswrapper[4721]: I0202 13:02:40.411291 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:40 crc kubenswrapper[4721]: E0202 13:02:40.411453 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:40 crc kubenswrapper[4721]: I0202 13:02:40.428789 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=8.428773328 podStartE2EDuration="8.428773328s" podCreationTimestamp="2026-02-02 13:02:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:02:40.428458088 +0000 UTC m=+100.730972477" watchObservedRunningTime="2026-02-02 13:02:40.428773328 +0000 UTC m=+100.731287717" Feb 02 13:02:41 crc kubenswrapper[4721]: I0202 13:02:41.408876 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:41 crc kubenswrapper[4721]: I0202 13:02:41.408931 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:41 crc kubenswrapper[4721]: E0202 13:02:41.409048 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:41 crc kubenswrapper[4721]: I0202 13:02:41.408881 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:02:41 crc kubenswrapper[4721]: E0202 13:02:41.409259 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:02:41 crc kubenswrapper[4721]: E0202 13:02:41.409274 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:42 crc kubenswrapper[4721]: I0202 13:02:42.409351 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:42 crc kubenswrapper[4721]: E0202 13:02:42.410426 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:43 crc kubenswrapper[4721]: I0202 13:02:43.408630 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:43 crc kubenswrapper[4721]: I0202 13:02:43.408639 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:02:43 crc kubenswrapper[4721]: I0202 13:02:43.408743 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:43 crc kubenswrapper[4721]: E0202 13:02:43.408841 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:43 crc kubenswrapper[4721]: E0202 13:02:43.408908 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:02:43 crc kubenswrapper[4721]: E0202 13:02:43.409150 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:44 crc kubenswrapper[4721]: I0202 13:02:44.028053 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfab3ffb-8798-423d-9b55-83868b76a14e-metrics-certs\") pod \"network-metrics-daemon-xqz79\" (UID: \"bfab3ffb-8798-423d-9b55-83868b76a14e\") " pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:02:44 crc kubenswrapper[4721]: E0202 13:02:44.028325 4721 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 13:02:44 crc kubenswrapper[4721]: E0202 13:02:44.028398 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfab3ffb-8798-423d-9b55-83868b76a14e-metrics-certs podName:bfab3ffb-8798-423d-9b55-83868b76a14e nodeName:}" failed. No retries permitted until 2026-02-02 13:03:48.028375415 +0000 UTC m=+168.330889844 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bfab3ffb-8798-423d-9b55-83868b76a14e-metrics-certs") pod "network-metrics-daemon-xqz79" (UID: "bfab3ffb-8798-423d-9b55-83868b76a14e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 13:02:44 crc kubenswrapper[4721]: I0202 13:02:44.409112 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:44 crc kubenswrapper[4721]: E0202 13:02:44.409339 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:45 crc kubenswrapper[4721]: I0202 13:02:45.409045 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:45 crc kubenswrapper[4721]: I0202 13:02:45.409234 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:45 crc kubenswrapper[4721]: E0202 13:02:45.409294 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:45 crc kubenswrapper[4721]: I0202 13:02:45.409053 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:02:45 crc kubenswrapper[4721]: E0202 13:02:45.409482 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:45 crc kubenswrapper[4721]: E0202 13:02:45.409707 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:02:46 crc kubenswrapper[4721]: I0202 13:02:46.409264 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:46 crc kubenswrapper[4721]: E0202 13:02:46.409593 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:47 crc kubenswrapper[4721]: I0202 13:02:47.409281 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:47 crc kubenswrapper[4721]: I0202 13:02:47.409334 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:02:47 crc kubenswrapper[4721]: E0202 13:02:47.409441 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:47 crc kubenswrapper[4721]: I0202 13:02:47.409455 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:47 crc kubenswrapper[4721]: E0202 13:02:47.409560 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:02:47 crc kubenswrapper[4721]: E0202 13:02:47.409743 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:48 crc kubenswrapper[4721]: I0202 13:02:48.409475 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:48 crc kubenswrapper[4721]: E0202 13:02:48.409695 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:49 crc kubenswrapper[4721]: I0202 13:02:49.409374 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:49 crc kubenswrapper[4721]: I0202 13:02:49.409428 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:02:49 crc kubenswrapper[4721]: I0202 13:02:49.409440 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:49 crc kubenswrapper[4721]: E0202 13:02:49.409972 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:02:49 crc kubenswrapper[4721]: E0202 13:02:49.409805 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:49 crc kubenswrapper[4721]: E0202 13:02:49.410083 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:50 crc kubenswrapper[4721]: I0202 13:02:50.409676 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:50 crc kubenswrapper[4721]: E0202 13:02:50.411818 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:51 crc kubenswrapper[4721]: I0202 13:02:51.409630 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:51 crc kubenswrapper[4721]: I0202 13:02:51.409703 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:02:51 crc kubenswrapper[4721]: I0202 13:02:51.409749 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:51 crc kubenswrapper[4721]: E0202 13:02:51.410432 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:51 crc kubenswrapper[4721]: E0202 13:02:51.410483 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:02:51 crc kubenswrapper[4721]: E0202 13:02:51.410513 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:51 crc kubenswrapper[4721]: I0202 13:02:51.411537 4721 scope.go:117] "RemoveContainer" containerID="0ef87877d327f3682656924644afe911470b558fe3fd45dd708e4d6f0aa69f29" Feb 02 13:02:51 crc kubenswrapper[4721]: E0202 13:02:51.411820 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pwcs2_openshift-ovn-kubernetes(b15bc48d-f88d-4b38-a9e1-00bb00b88a52)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" Feb 02 13:02:52 crc kubenswrapper[4721]: I0202 13:02:52.409062 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:52 crc kubenswrapper[4721]: E0202 13:02:52.409264 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:53 crc kubenswrapper[4721]: I0202 13:02:53.408667 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:53 crc kubenswrapper[4721]: I0202 13:02:53.408765 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:53 crc kubenswrapper[4721]: I0202 13:02:53.408765 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:02:53 crc kubenswrapper[4721]: E0202 13:02:53.409380 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:53 crc kubenswrapper[4721]: E0202 13:02:53.409515 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:53 crc kubenswrapper[4721]: E0202 13:02:53.409577 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:02:54 crc kubenswrapper[4721]: I0202 13:02:54.409723 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:54 crc kubenswrapper[4721]: E0202 13:02:54.409961 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:55 crc kubenswrapper[4721]: I0202 13:02:55.409720 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:02:55 crc kubenswrapper[4721]: E0202 13:02:55.409843 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:02:55 crc kubenswrapper[4721]: I0202 13:02:55.410037 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:55 crc kubenswrapper[4721]: E0202 13:02:55.410132 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:55 crc kubenswrapper[4721]: I0202 13:02:55.410281 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:55 crc kubenswrapper[4721]: E0202 13:02:55.410339 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:56 crc kubenswrapper[4721]: I0202 13:02:56.409202 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:56 crc kubenswrapper[4721]: E0202 13:02:56.409423 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:57 crc kubenswrapper[4721]: I0202 13:02:57.409133 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:57 crc kubenswrapper[4721]: I0202 13:02:57.409143 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:02:57 crc kubenswrapper[4721]: I0202 13:02:57.409170 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:57 crc kubenswrapper[4721]: E0202 13:02:57.409497 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:02:57 crc kubenswrapper[4721]: E0202 13:02:57.409567 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:57 crc kubenswrapper[4721]: E0202 13:02:57.409279 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:02:58 crc kubenswrapper[4721]: I0202 13:02:58.409587 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:02:58 crc kubenswrapper[4721]: E0202 13:02:58.409733 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:02:59 crc kubenswrapper[4721]: I0202 13:02:59.408877 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:02:59 crc kubenswrapper[4721]: I0202 13:02:59.408879 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:02:59 crc kubenswrapper[4721]: I0202 13:02:59.408920 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:02:59 crc kubenswrapper[4721]: E0202 13:02:59.409126 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:02:59 crc kubenswrapper[4721]: E0202 13:02:59.409256 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:02:59 crc kubenswrapper[4721]: E0202 13:02:59.409622 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:00 crc kubenswrapper[4721]: I0202 13:03:00.315963 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ltw7d_5ba84858-caaa-4fba-8eaf-9f7ddece0b3a/kube-multus/1.log" Feb 02 13:03:00 crc kubenswrapper[4721]: I0202 13:03:00.316784 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ltw7d_5ba84858-caaa-4fba-8eaf-9f7ddece0b3a/kube-multus/0.log" Feb 02 13:03:00 crc kubenswrapper[4721]: I0202 13:03:00.316867 4721 generic.go:334] "Generic (PLEG): container finished" podID="5ba84858-caaa-4fba-8eaf-9f7ddece0b3a" containerID="3e01f486c1be69ceb8e869e6feefbef4e07fd230f5cb41ec3adfaaba36430569" exitCode=1 Feb 02 13:03:00 crc kubenswrapper[4721]: I0202 13:03:00.316917 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ltw7d" event={"ID":"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a","Type":"ContainerDied","Data":"3e01f486c1be69ceb8e869e6feefbef4e07fd230f5cb41ec3adfaaba36430569"} Feb 02 13:03:00 crc kubenswrapper[4721]: I0202 13:03:00.316970 4721 scope.go:117] "RemoveContainer" containerID="0b928ff615c2bf8846d26d69c3c76c248795ed9f57be2d3e90b212ea75abeab6" Feb 02 13:03:00 crc kubenswrapper[4721]: I0202 13:03:00.317587 4721 scope.go:117] "RemoveContainer" containerID="3e01f486c1be69ceb8e869e6feefbef4e07fd230f5cb41ec3adfaaba36430569" Feb 02 13:03:00 crc kubenswrapper[4721]: E0202 13:03:00.317911 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-ltw7d_openshift-multus(5ba84858-caaa-4fba-8eaf-9f7ddece0b3a)\"" pod="openshift-multus/multus-ltw7d" podUID="5ba84858-caaa-4fba-8eaf-9f7ddece0b3a" Feb 02 13:03:00 crc kubenswrapper[4721]: E0202 13:03:00.354371 4721 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 02 13:03:00 crc kubenswrapper[4721]: I0202 13:03:00.409551 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:00 crc kubenswrapper[4721]: E0202 13:03:00.410632 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:00 crc kubenswrapper[4721]: E0202 13:03:00.519396 4721 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 13:03:01 crc kubenswrapper[4721]: I0202 13:03:01.322645 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ltw7d_5ba84858-caaa-4fba-8eaf-9f7ddece0b3a/kube-multus/1.log" Feb 02 13:03:01 crc kubenswrapper[4721]: I0202 13:03:01.409603 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:01 crc kubenswrapper[4721]: I0202 13:03:01.409637 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:01 crc kubenswrapper[4721]: E0202 13:03:01.410258 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:01 crc kubenswrapper[4721]: I0202 13:03:01.409644 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:03:01 crc kubenswrapper[4721]: E0202 13:03:01.410332 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:03:01 crc kubenswrapper[4721]: E0202 13:03:01.410050 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:02 crc kubenswrapper[4721]: I0202 13:03:02.409254 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:02 crc kubenswrapper[4721]: E0202 13:03:02.409426 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:03 crc kubenswrapper[4721]: I0202 13:03:03.409136 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:03 crc kubenswrapper[4721]: I0202 13:03:03.409170 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:03 crc kubenswrapper[4721]: I0202 13:03:03.409134 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:03:03 crc kubenswrapper[4721]: E0202 13:03:03.409293 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:03 crc kubenswrapper[4721]: E0202 13:03:03.409477 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:03 crc kubenswrapper[4721]: E0202 13:03:03.409611 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:03:04 crc kubenswrapper[4721]: I0202 13:03:04.409279 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:04 crc kubenswrapper[4721]: E0202 13:03:04.409429 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:04 crc kubenswrapper[4721]: I0202 13:03:04.410556 4721 scope.go:117] "RemoveContainer" containerID="0ef87877d327f3682656924644afe911470b558fe3fd45dd708e4d6f0aa69f29" Feb 02 13:03:05 crc kubenswrapper[4721]: I0202 13:03:05.292538 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xqz79"] Feb 02 13:03:05 crc kubenswrapper[4721]: I0202 13:03:05.292682 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:03:05 crc kubenswrapper[4721]: E0202 13:03:05.292860 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:03:05 crc kubenswrapper[4721]: I0202 13:03:05.337607 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwcs2_b15bc48d-f88d-4b38-a9e1-00bb00b88a52/ovnkube-controller/3.log" Feb 02 13:03:05 crc kubenswrapper[4721]: I0202 13:03:05.339927 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" event={"ID":"b15bc48d-f88d-4b38-a9e1-00bb00b88a52","Type":"ContainerStarted","Data":"3d1bf13cf098a5769bf003590f59136941db8ae4e0650114deb6dfec3a6e6464"} Feb 02 13:03:05 crc kubenswrapper[4721]: I0202 13:03:05.340375 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:03:05 crc kubenswrapper[4721]: I0202 13:03:05.367774 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" podStartSLOduration=100.367757512 podStartE2EDuration="1m40.367757512s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:05.36500301 +0000 UTC m=+125.667517409" watchObservedRunningTime="2026-02-02 13:03:05.367757512 +0000 UTC m=+125.670271901" Feb 02 13:03:05 crc kubenswrapper[4721]: I0202 13:03:05.409591 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:05 crc kubenswrapper[4721]: I0202 13:03:05.409658 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:05 crc kubenswrapper[4721]: E0202 13:03:05.409751 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:05 crc kubenswrapper[4721]: E0202 13:03:05.409831 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:05 crc kubenswrapper[4721]: E0202 13:03:05.520269 4721 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 13:03:06 crc kubenswrapper[4721]: I0202 13:03:06.409170 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:06 crc kubenswrapper[4721]: E0202 13:03:06.409440 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:07 crc kubenswrapper[4721]: I0202 13:03:07.409141 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:07 crc kubenswrapper[4721]: E0202 13:03:07.409279 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:07 crc kubenswrapper[4721]: I0202 13:03:07.409372 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:03:07 crc kubenswrapper[4721]: E0202 13:03:07.409431 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:03:07 crc kubenswrapper[4721]: I0202 13:03:07.409474 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:07 crc kubenswrapper[4721]: E0202 13:03:07.409521 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:08 crc kubenswrapper[4721]: I0202 13:03:08.409707 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:08 crc kubenswrapper[4721]: E0202 13:03:08.409857 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:09 crc kubenswrapper[4721]: I0202 13:03:09.408908 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:09 crc kubenswrapper[4721]: I0202 13:03:09.408972 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:03:09 crc kubenswrapper[4721]: E0202 13:03:09.409155 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:09 crc kubenswrapper[4721]: I0202 13:03:09.409250 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:09 crc kubenswrapper[4721]: E0202 13:03:09.409338 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:03:09 crc kubenswrapper[4721]: E0202 13:03:09.409532 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:10 crc kubenswrapper[4721]: I0202 13:03:10.409719 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:10 crc kubenswrapper[4721]: E0202 13:03:10.411468 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:10 crc kubenswrapper[4721]: E0202 13:03:10.521088 4721 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 13:03:11 crc kubenswrapper[4721]: I0202 13:03:11.409604 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:03:11 crc kubenswrapper[4721]: I0202 13:03:11.409620 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:11 crc kubenswrapper[4721]: I0202 13:03:11.409620 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:11 crc kubenswrapper[4721]: E0202 13:03:11.410418 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:03:11 crc kubenswrapper[4721]: E0202 13:03:11.410563 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:11 crc kubenswrapper[4721]: E0202 13:03:11.410703 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:12 crc kubenswrapper[4721]: I0202 13:03:12.409128 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:12 crc kubenswrapper[4721]: E0202 13:03:12.409682 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:13 crc kubenswrapper[4721]: I0202 13:03:13.409261 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:03:13 crc kubenswrapper[4721]: I0202 13:03:13.409265 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:13 crc kubenswrapper[4721]: I0202 13:03:13.409281 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:13 crc kubenswrapper[4721]: E0202 13:03:13.410412 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:03:13 crc kubenswrapper[4721]: E0202 13:03:13.410554 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:13 crc kubenswrapper[4721]: E0202 13:03:13.410641 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:14 crc kubenswrapper[4721]: I0202 13:03:14.409680 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:14 crc kubenswrapper[4721]: E0202 13:03:14.410191 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:15 crc kubenswrapper[4721]: I0202 13:03:15.409514 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:03:15 crc kubenswrapper[4721]: I0202 13:03:15.409545 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:15 crc kubenswrapper[4721]: E0202 13:03:15.409668 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:03:15 crc kubenswrapper[4721]: I0202 13:03:15.409860 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:15 crc kubenswrapper[4721]: E0202 13:03:15.410037 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:15 crc kubenswrapper[4721]: E0202 13:03:15.410331 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:15 crc kubenswrapper[4721]: I0202 13:03:15.410484 4721 scope.go:117] "RemoveContainer" containerID="3e01f486c1be69ceb8e869e6feefbef4e07fd230f5cb41ec3adfaaba36430569" Feb 02 13:03:15 crc kubenswrapper[4721]: E0202 13:03:15.522441 4721 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 13:03:16 crc kubenswrapper[4721]: I0202 13:03:16.386904 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ltw7d_5ba84858-caaa-4fba-8eaf-9f7ddece0b3a/kube-multus/1.log" Feb 02 13:03:16 crc kubenswrapper[4721]: I0202 13:03:16.386975 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ltw7d" event={"ID":"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a","Type":"ContainerStarted","Data":"c98892a7ff179bcba871f45746b3c85a83090c01da93d13aeaeb2a282472689d"} Feb 02 13:03:16 crc kubenswrapper[4721]: I0202 13:03:16.408896 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:16 crc kubenswrapper[4721]: E0202 13:03:16.409213 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:17 crc kubenswrapper[4721]: I0202 13:03:17.408715 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:17 crc kubenswrapper[4721]: I0202 13:03:17.408715 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:03:17 crc kubenswrapper[4721]: I0202 13:03:17.408715 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:17 crc kubenswrapper[4721]: E0202 13:03:17.410002 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:17 crc kubenswrapper[4721]: E0202 13:03:17.410153 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:03:17 crc kubenswrapper[4721]: E0202 13:03:17.410255 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:18 crc kubenswrapper[4721]: I0202 13:03:18.408970 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:18 crc kubenswrapper[4721]: E0202 13:03:18.409186 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:19 crc kubenswrapper[4721]: I0202 13:03:19.409140 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:19 crc kubenswrapper[4721]: I0202 13:03:19.409217 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:19 crc kubenswrapper[4721]: I0202 13:03:19.409148 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:03:19 crc kubenswrapper[4721]: E0202 13:03:19.409358 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 13:03:19 crc kubenswrapper[4721]: E0202 13:03:19.409482 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 13:03:19 crc kubenswrapper[4721]: E0202 13:03:19.409626 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqz79" podUID="bfab3ffb-8798-423d-9b55-83868b76a14e" Feb 02 13:03:20 crc kubenswrapper[4721]: I0202 13:03:20.409610 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:20 crc kubenswrapper[4721]: E0202 13:03:20.412145 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 13:03:21 crc kubenswrapper[4721]: I0202 13:03:21.409568 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:21 crc kubenswrapper[4721]: I0202 13:03:21.409613 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:03:21 crc kubenswrapper[4721]: I0202 13:03:21.410547 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:21 crc kubenswrapper[4721]: I0202 13:03:21.412679 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 02 13:03:21 crc kubenswrapper[4721]: I0202 13:03:21.413260 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 02 13:03:21 crc kubenswrapper[4721]: I0202 13:03:21.416124 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 02 13:03:21 crc kubenswrapper[4721]: I0202 13:03:21.416174 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 02 13:03:22 crc kubenswrapper[4721]: I0202 13:03:22.408765 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:22 crc kubenswrapper[4721]: I0202 13:03:22.412416 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 02 13:03:22 crc kubenswrapper[4721]: I0202 13:03:22.412460 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 02 13:03:22 crc kubenswrapper[4721]: I0202 13:03:22.503649 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:03:27 crc kubenswrapper[4721]: I0202 13:03:27.151626 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:27 crc kubenswrapper[4721]: E0202 13:03:27.151957 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:05:29.151914502 +0000 UTC m=+269.454428921 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:27 crc kubenswrapper[4721]: I0202 13:03:27.252532 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:27 crc kubenswrapper[4721]: I0202 13:03:27.252584 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:27 crc kubenswrapper[4721]: I0202 13:03:27.252608 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:27 crc kubenswrapper[4721]: I0202 13:03:27.252625 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:27 crc kubenswrapper[4721]: I0202 13:03:27.253567 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:27 crc kubenswrapper[4721]: I0202 13:03:27.258897 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:27 crc kubenswrapper[4721]: I0202 13:03:27.258933 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:27 crc kubenswrapper[4721]: I0202 13:03:27.260000 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:27 crc kubenswrapper[4721]: I0202 13:03:27.432278 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:27 crc kubenswrapper[4721]: I0202 13:03:27.455269 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 13:03:27 crc kubenswrapper[4721]: I0202 13:03:27.528972 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 13:03:27 crc kubenswrapper[4721]: W0202 13:03:27.752642 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-0ef6073105612947afbeaf4f24b98a3cbcdec6b1085e2cfb473b600890e39caa WatchSource:0}: Error finding container 0ef6073105612947afbeaf4f24b98a3cbcdec6b1085e2cfb473b600890e39caa: Status 404 returned error can't find the container with id 0ef6073105612947afbeaf4f24b98a3cbcdec6b1085e2cfb473b600890e39caa Feb 02 13:03:27 crc kubenswrapper[4721]: W0202 13:03:27.908307 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-0f194111f9c52c06cbe2c2889a4f609b06af98f099a3cb6173c8e3be0cd624e5 WatchSource:0}: Error finding container 0f194111f9c52c06cbe2c2889a4f609b06af98f099a3cb6173c8e3be0cd624e5: Status 404 returned error can't find the container with id 0f194111f9c52c06cbe2c2889a4f609b06af98f099a3cb6173c8e3be0cd624e5 Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.433018 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"4cf4cf9fd34c7d2ec2709b7f856ae03cc4966cf67baa38655e8de98108e5a88f"} Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.434316 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0ef6073105612947afbeaf4f24b98a3cbcdec6b1085e2cfb473b600890e39caa"} Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.434737 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"58e3c3d6723f3a79dcbbbc5d50377f655d347bc71e21de8db1163183dead9e70"} Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.434804 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1afbd55df17a654109cc7be12c213147ebbcdf7cefc4e21d97d802b8fa372095"} Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.436188 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1ce9b05059d8678d782d029736ee06c4951a9a0ae615275995003cdccfb94bcd"} Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.436242 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0f194111f9c52c06cbe2c2889a4f609b06af98f099a3cb6173c8e3be0cd624e5"} Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.436443 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.554860 4721 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeReady" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.596726 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-pnfph"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.597096 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-kcw66"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.597490 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.597836 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.598893 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-pnfph" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.599674 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.617286 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.617497 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ffkjd"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.617859 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.617989 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ffkjd" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.618146 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.618183 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.618824 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.619137 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jrfhj"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.619036 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.619156 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.619203 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.619277 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.620182 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.620346 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.620516 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.620651 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.620567 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.621219 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.621386 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.621480 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.621625 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.621236 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.621913 4721 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"kube-root-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.622655 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-2dsnx"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.623232 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-2dsnx" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.622845 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jrfhj" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.624879 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2rjrk"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.625250 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2rjrk" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.628298 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.628998 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-fgg7h"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.629638 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fgg7h" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.631526 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fqbhq"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.632029 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.634920 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.635432 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-f4l8v"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.635868 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-xbv2j"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.636303 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l49ld"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.636711 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l49ld" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.638503 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.642576 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.642863 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.643049 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.643268 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.644770 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-f4l8v" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.645090 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.645214 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbv2j" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.645331 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.645710 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.645834 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.645923 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.645986 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.645731 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.646043 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.646187 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.646323 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.646351 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.646147 4721 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.646383 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.646452 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.646639 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.646686 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.646742 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.646919 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.649499 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.650129 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.650244 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.650510 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.650689 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.651909 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.652155 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.652334 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.652529 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.652651 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.653849 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-zt9ng"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.654273 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvz9q"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.654582 4721 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-pnfph"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.654664 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvz9q" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.654952 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-zt9ng" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.656751 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.657268 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.657630 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.658452 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.668975 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.669204 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.669317 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.669585 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.669958 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.670767 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f1e834f-23b5-42a5-9d13-b9e5720a597c-client-ca\") pod \"route-controller-manager-6576b87f9c-bg49f\" (UID: \"8f1e834f-23b5-42a5-9d13-b9e5720a597c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.670893 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/64fc7a32-5852-4e03-b1b7-1663f7f52b65-encryption-config\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.670985 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/64fc7a32-5852-4e03-b1b7-1663f7f52b65-node-pullsecrets\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " 
pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.671093 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/64fc7a32-5852-4e03-b1b7-1663f7f52b65-etcd-serving-ca\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.671191 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/64fc7a32-5852-4e03-b1b7-1663f7f52b65-etcd-client\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.671267 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64fc7a32-5852-4e03-b1b7-1663f7f52b65-serving-cert\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.671343 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2p87\" (UniqueName: \"kubernetes.io/projected/8f1e834f-23b5-42a5-9d13-b9e5720a597c-kube-api-access-w2p87\") pod \"route-controller-manager-6576b87f9c-bg49f\" (UID: \"8f1e834f-23b5-42a5-9d13-b9e5720a597c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.671421 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/64fc7a32-5852-4e03-b1b7-1663f7f52b65-image-import-ca\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.671509 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw6dh\" (UniqueName: \"kubernetes.io/projected/083f0d8a-e0c4-46ae-8993-8547dd260553-kube-api-access-jw6dh\") pod \"machine-api-operator-5694c8668f-pnfph\" (UID: \"083f0d8a-e0c4-46ae-8993-8547dd260553\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pnfph" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.671611 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f1e834f-23b5-42a5-9d13-b9e5720a597c-serving-cert\") pod \"route-controller-manager-6576b87f9c-bg49f\" (UID: \"8f1e834f-23b5-42a5-9d13-b9e5720a597c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.671693 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c0670a6-888e-40e3-bf5d-82779e70dd1c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ffkjd\" (UID: \"3c0670a6-888e-40e3-bf5d-82779e70dd1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ffkjd" Feb 02 13:03:28 crc 
kubenswrapper[4721]: I0202 13:03:28.671831 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64fc7a32-5852-4e03-b1b7-1663f7f52b65-config\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.672095 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6sbb\" (UniqueName: \"kubernetes.io/projected/64fc7a32-5852-4e03-b1b7-1663f7f52b65-kube-api-access-q6sbb\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.672182 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/64fc7a32-5852-4e03-b1b7-1663f7f52b65-audit\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.672263 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c0670a6-888e-40e3-bf5d-82779e70dd1c-config\") pod \"controller-manager-879f6c89f-ffkjd\" (UID: \"3c0670a6-888e-40e3-bf5d-82779e70dd1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ffkjd" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.672352 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c0670a6-888e-40e3-bf5d-82779e70dd1c-client-ca\") pod \"controller-manager-879f6c89f-ffkjd\" (UID: \"3c0670a6-888e-40e3-bf5d-82779e70dd1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ffkjd" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.672430 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/083f0d8a-e0c4-46ae-8993-8547dd260553-config\") pod \"machine-api-operator-5694c8668f-pnfph\" (UID: \"083f0d8a-e0c4-46ae-8993-8547dd260553\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pnfph" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.672512 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64fc7a32-5852-4e03-b1b7-1663f7f52b65-trusted-ca-bundle\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.672611 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/083f0d8a-e0c4-46ae-8993-8547dd260553-images\") pod \"machine-api-operator-5694c8668f-pnfph\" (UID: \"083f0d8a-e0c4-46ae-8993-8547dd260553\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pnfph" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.672690 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/083f0d8a-e0c4-46ae-8993-8547dd260553-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-pnfph\" (UID: \"083f0d8a-e0c4-46ae-8993-8547dd260553\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pnfph" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.672766 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f1e834f-23b5-42a5-9d13-b9e5720a597c-config\") pod \"route-controller-manager-6576b87f9c-bg49f\" (UID: \"8f1e834f-23b5-42a5-9d13-b9e5720a597c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.672849 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nff5\" (UniqueName: \"kubernetes.io/projected/3c0670a6-888e-40e3-bf5d-82779e70dd1c-kube-api-access-4nff5\") pod \"controller-manager-879f6c89f-ffkjd\" (UID: \"3c0670a6-888e-40e3-bf5d-82779e70dd1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ffkjd" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.672928 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/64fc7a32-5852-4e03-b1b7-1663f7f52b65-audit-dir\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.673055 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c0670a6-888e-40e3-bf5d-82779e70dd1c-serving-cert\") pod \"controller-manager-879f6c89f-ffkjd\" (UID: \"3c0670a6-888e-40e3-bf5d-82779e70dd1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ffkjd" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.675812 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wlhhk"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.676819 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.688314 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.688509 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.688716 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.689251 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.689371 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.689468 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.689741 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.689908 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.690449 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.695090 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td8sr"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.699081 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2jzch"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.699477 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-zg529"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.700059 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-zg529" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.700482 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td8sr" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.690789 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.700523 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2jzch" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.693687 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.713097 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.713227 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.713251 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.713457 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.713094 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.715317 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-z25fz"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.715501 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.716389 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-7vhgv"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.716547 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z25fz" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.716806 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-7vhgv" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.733141 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.733440 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.733683 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.734225 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.734409 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.735307 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.735668 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.736390 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.736645 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.736814 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.737105 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.737545 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkplf"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.741081 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.742587 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.775722 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6wdm7"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.776669 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-78nss"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.776801 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.777635 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6wdm7" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.778589 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.778703 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.778787 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.778967 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.779255 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.779294 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.779261 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.779406 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.779477 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.779478 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.779418 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.779633 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkplf" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.779753 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.780782 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-f5gbr"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.781943 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qr8r7"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.782348 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-78nss" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.782712 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f5gbr" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.784704 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.802350 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jdzwk"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.802889 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.802919 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2wzzt"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.803482 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2wzzt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.803814 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qr8r7" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.803993 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jdzwk" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.804510 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.806302 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zcf44"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.807620 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-csktx"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.809092 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bz9nm"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.806340 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.809462 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.809521 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk95h\" (UniqueName: \"kubernetes.io/projected/202d08f0-f5ea-4414-b2e6-5a690148a823-kube-api-access-zk95h\") pod \"cluster-image-registry-operator-dc59b4c8b-pvz9q\" 
(UID: \"202d08f0-f5ea-4414-b2e6-5a690148a823\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvz9q" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.809548 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ae3f417e-2bae-44dd-973f-5314b6f64972-service-ca\") pod \"console-f9d7485db-2dsnx\" (UID: \"ae3f417e-2bae-44dd-973f-5314b6f64972\") " pod="openshift-console/console-f9d7485db-2dsnx" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.809575 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.809606 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/202d08f0-f5ea-4414-b2e6-5a690148a823-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-pvz9q\" (UID: \"202d08f0-f5ea-4414-b2e6-5a690148a823\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvz9q" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.809634 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f028d7b1-e474-45f8-9c4e-d1b2322175c7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-td8sr\" (UID: \"f028d7b1-e474-45f8-9c4e-d1b2322175c7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td8sr" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.809680 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/083f0d8a-e0c4-46ae-8993-8547dd260553-images\") pod \"machine-api-operator-5694c8668f-pnfph\" (UID: \"083f0d8a-e0c4-46ae-8993-8547dd260553\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pnfph" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.809706 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/083f0d8a-e0c4-46ae-8993-8547dd260553-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-pnfph\" (UID: \"083f0d8a-e0c4-46ae-8993-8547dd260553\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pnfph" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.809732 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f1e834f-23b5-42a5-9d13-b9e5720a597c-config\") pod \"route-controller-manager-6576b87f9c-bg49f\" (UID: \"8f1e834f-23b5-42a5-9d13-b9e5720a597c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.809792 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61358eab-20de-46bb-9701-dc736e6eb5ff-trusted-ca\") pod \"console-operator-58897d9998-f4l8v\" (UID: 
\"61358eab-20de-46bb-9701-dc736e6eb5ff\") " pod="openshift-console-operator/console-operator-58897d9998-f4l8v" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.809818 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ae3f417e-2bae-44dd-973f-5314b6f64972-oauth-serving-cert\") pod \"console-f9d7485db-2dsnx\" (UID: \"ae3f417e-2bae-44dd-973f-5314b6f64972\") " pod="openshift-console/console-f9d7485db-2dsnx" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.809842 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/202d08f0-f5ea-4414-b2e6-5a690148a823-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-pvz9q\" (UID: \"202d08f0-f5ea-4414-b2e6-5a690148a823\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvz9q" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.809865 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7qwv\" (UniqueName: \"kubernetes.io/projected/70d74e61-4d44-4a6c-8a14-16e131d79e47-kube-api-access-j7qwv\") pod \"ingress-operator-5b745b69d9-z25fz\" (UID: \"70d74e61-4d44-4a6c-8a14-16e131d79e47\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z25fz" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.809887 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/962524c6-7992-43d5-a7f3-5fdd04297f24-audit-dir\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.809918 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nff5\" (UniqueName: \"kubernetes.io/projected/3c0670a6-888e-40e3-bf5d-82779e70dd1c-kube-api-access-4nff5\") pod \"controller-manager-879f6c89f-ffkjd\" (UID: \"3c0670a6-888e-40e3-bf5d-82779e70dd1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ffkjd" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.809942 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/64fc7a32-5852-4e03-b1b7-1663f7f52b65-audit-dir\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.809963 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/70d74e61-4d44-4a6c-8a14-16e131d79e47-metrics-tls\") pod \"ingress-operator-5b745b69d9-z25fz\" (UID: \"70d74e61-4d44-4a6c-8a14-16e131d79e47\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z25fz" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.809985 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/67f000c5-5173-44e3-89e6-446c345a6c05-machine-approver-tls\") pod \"machine-approver-56656f9798-xbv2j\" (UID: \"67f000c5-5173-44e3-89e6-446c345a6c05\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbv2j" Feb 
02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810013 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a636bbb-70b8-4b2a-96c6-94f9edba40cc-service-ca-bundle\") pod \"authentication-operator-69f744f599-jrfhj\" (UID: \"4a636bbb-70b8-4b2a-96c6-94f9edba40cc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jrfhj" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810047 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75fc13c2-ccc5-46a0-8a65-d6bc5340baab-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2rjrk\" (UID: \"75fc13c2-ccc5-46a0-8a65-d6bc5340baab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2rjrk" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810098 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c0670a6-888e-40e3-bf5d-82779e70dd1c-serving-cert\") pod \"controller-manager-879f6c89f-ffkjd\" (UID: \"3c0670a6-888e-40e3-bf5d-82779e70dd1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ffkjd" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810128 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f1e834f-23b5-42a5-9d13-b9e5720a597c-client-ca\") pod \"route-controller-manager-6576b87f9c-bg49f\" (UID: \"8f1e834f-23b5-42a5-9d13-b9e5720a597c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810152 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f5kv\" (UniqueName: \"kubernetes.io/projected/a1dde568-291e-40bf-9df7-18cd5449d0aa-kube-api-access-8f5kv\") pod \"dns-operator-744455d44c-zg529\" (UID: \"a1dde568-291e-40bf-9df7-18cd5449d0aa\") " pod="openshift-dns-operator/dns-operator-744455d44c-zg529" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810175 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/962524c6-7992-43d5-a7f3-5fdd04297f24-audit-policies\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810214 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/64fc7a32-5852-4e03-b1b7-1663f7f52b65-encryption-config\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810239 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/70d74e61-4d44-4a6c-8a14-16e131d79e47-bound-sa-token\") pod \"ingress-operator-5b745b69d9-z25fz\" (UID: \"70d74e61-4d44-4a6c-8a14-16e131d79e47\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z25fz" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810260 4721 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810283 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f028d7b1-e474-45f8-9c4e-d1b2322175c7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-td8sr\" (UID: \"f028d7b1-e474-45f8-9c4e-d1b2322175c7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td8sr" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810311 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/64fc7a32-5852-4e03-b1b7-1663f7f52b65-node-pullsecrets\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810337 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kth9\" (UniqueName: \"kubernetes.io/projected/4a636bbb-70b8-4b2a-96c6-94f9edba40cc-kube-api-access-9kth9\") pod \"authentication-operator-69f744f599-jrfhj\" (UID: \"4a636bbb-70b8-4b2a-96c6-94f9edba40cc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jrfhj" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810360 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ae3f417e-2bae-44dd-973f-5314b6f64972-console-oauth-config\") pod \"console-f9d7485db-2dsnx\" (UID: \"ae3f417e-2bae-44dd-973f-5314b6f64972\") " pod="openshift-console/console-f9d7485db-2dsnx" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810384 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/64fc7a32-5852-4e03-b1b7-1663f7f52b65-etcd-serving-ca\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810407 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ae3f417e-2bae-44dd-973f-5314b6f64972-console-config\") pod \"console-f9d7485db-2dsnx\" (UID: \"ae3f417e-2bae-44dd-973f-5314b6f64972\") " pod="openshift-console/console-f9d7485db-2dsnx" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810432 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/64fc7a32-5852-4e03-b1b7-1663f7f52b65-etcd-client\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810453 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/64fc7a32-5852-4e03-b1b7-1663f7f52b65-serving-cert\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810482 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2p87\" (UniqueName: \"kubernetes.io/projected/8f1e834f-23b5-42a5-9d13-b9e5720a597c-kube-api-access-w2p87\") pod \"route-controller-manager-6576b87f9c-bg49f\" (UID: \"8f1e834f-23b5-42a5-9d13-b9e5720a597c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810504 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61358eab-20de-46bb-9701-dc736e6eb5ff-serving-cert\") pod \"console-operator-58897d9998-f4l8v\" (UID: \"61358eab-20de-46bb-9701-dc736e6eb5ff\") " pod="openshift-console-operator/console-operator-58897d9998-f4l8v" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810526 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810551 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/64fc7a32-5852-4e03-b1b7-1663f7f52b65-image-import-ca\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810573 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810596 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810626 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw876\" (UniqueName: \"kubernetes.io/projected/75fc13c2-ccc5-46a0-8a65-d6bc5340baab-kube-api-access-zw876\") pod \"openshift-apiserver-operator-796bbdcf4f-2rjrk\" (UID: \"75fc13c2-ccc5-46a0-8a65-d6bc5340baab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2rjrk" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810652 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/67f000c5-5173-44e3-89e6-446c345a6c05-config\") pod \"machine-approver-56656f9798-xbv2j\" (UID: \"67f000c5-5173-44e3-89e6-446c345a6c05\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbv2j" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810673 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3bbe9190-62bb-4079-afa7-adc9e970eae6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-l49ld\" (UID: \"3bbe9190-62bb-4079-afa7-adc9e970eae6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l49ld" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810694 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68hn9\" (UniqueName: \"kubernetes.io/projected/61358eab-20de-46bb-9701-dc736e6eb5ff-kube-api-access-68hn9\") pod \"console-operator-58897d9998-f4l8v\" (UID: \"61358eab-20de-46bb-9701-dc736e6eb5ff\") " pod="openshift-console-operator/console-operator-58897d9998-f4l8v" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810715 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae3f417e-2bae-44dd-973f-5314b6f64972-console-serving-cert\") pod \"console-f9d7485db-2dsnx\" (UID: \"ae3f417e-2bae-44dd-973f-5314b6f64972\") " pod="openshift-console/console-f9d7485db-2dsnx" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810740 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw6dh\" (UniqueName: \"kubernetes.io/projected/083f0d8a-e0c4-46ae-8993-8547dd260553-kube-api-access-jw6dh\") pod \"machine-api-operator-5694c8668f-pnfph\" (UID: \"083f0d8a-e0c4-46ae-8993-8547dd260553\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pnfph" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810764 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zht5s\" (UniqueName: \"kubernetes.io/projected/67f000c5-5173-44e3-89e6-446c345a6c05-kube-api-access-zht5s\") pod \"machine-approver-56656f9798-xbv2j\" (UID: \"67f000c5-5173-44e3-89e6-446c345a6c05\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbv2j" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810800 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61358eab-20de-46bb-9701-dc736e6eb5ff-config\") pod \"console-operator-58897d9998-f4l8v\" (UID: \"61358eab-20de-46bb-9701-dc736e6eb5ff\") " pod="openshift-console-operator/console-operator-58897d9998-f4l8v" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810822 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810850 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8f1e834f-23b5-42a5-9d13-b9e5720a597c-serving-cert\") pod \"route-controller-manager-6576b87f9c-bg49f\" (UID: \"8f1e834f-23b5-42a5-9d13-b9e5720a597c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810875 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae3f417e-2bae-44dd-973f-5314b6f64972-trusted-ca-bundle\") pod \"console-f9d7485db-2dsnx\" (UID: \"ae3f417e-2bae-44dd-973f-5314b6f64972\") " pod="openshift-console/console-f9d7485db-2dsnx" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810902 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810929 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c0670a6-888e-40e3-bf5d-82779e70dd1c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ffkjd\" (UID: \"3c0670a6-888e-40e3-bf5d-82779e70dd1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ffkjd" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810954 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64fc7a32-5852-4e03-b1b7-1663f7f52b65-config\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.810977 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6sbb\" (UniqueName: \"kubernetes.io/projected/64fc7a32-5852-4e03-b1b7-1663f7f52b65-kube-api-access-q6sbb\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.811003 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67f000c5-5173-44e3-89e6-446c345a6c05-auth-proxy-config\") pod \"machine-approver-56656f9798-xbv2j\" (UID: \"67f000c5-5173-44e3-89e6-446c345a6c05\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbv2j" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.811030 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/64fc7a32-5852-4e03-b1b7-1663f7f52b65-audit\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.811055 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a636bbb-70b8-4b2a-96c6-94f9edba40cc-serving-cert\") pod \"authentication-operator-69f744f599-jrfhj\" (UID: \"4a636bbb-70b8-4b2a-96c6-94f9edba40cc\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-jrfhj" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.811093 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75fc13c2-ccc5-46a0-8a65-d6bc5340baab-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2rjrk\" (UID: \"75fc13c2-ccc5-46a0-8a65-d6bc5340baab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2rjrk" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.811115 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a1dde568-291e-40bf-9df7-18cd5449d0aa-metrics-tls\") pod \"dns-operator-744455d44c-zg529\" (UID: \"a1dde568-291e-40bf-9df7-18cd5449d0aa\") " pod="openshift-dns-operator/dns-operator-744455d44c-zg529" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.811139 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxdv2\" (UniqueName: \"kubernetes.io/projected/3bbe9190-62bb-4079-afa7-adc9e970eae6-kube-api-access-vxdv2\") pod \"cluster-samples-operator-665b6dd947-l49ld\" (UID: \"3bbe9190-62bb-4079-afa7-adc9e970eae6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l49ld" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.811155 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-csktx" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.808652 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.811163 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bwbg\" (UniqueName: \"kubernetes.io/projected/f028d7b1-e474-45f8-9c4e-d1b2322175c7-kube-api-access-9bwbg\") pod \"openshift-controller-manager-operator-756b6f6bc6-td8sr\" (UID: \"f028d7b1-e474-45f8-9c4e-d1b2322175c7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td8sr" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.811464 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a636bbb-70b8-4b2a-96c6-94f9edba40cc-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jrfhj\" (UID: \"4a636bbb-70b8-4b2a-96c6-94f9edba40cc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jrfhj" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.811492 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/70d74e61-4d44-4a6c-8a14-16e131d79e47-trusted-ca\") pod \"ingress-operator-5b745b69d9-z25fz\" (UID: \"70d74e61-4d44-4a6c-8a14-16e131d79e47\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z25fz" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.811513 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9h96\" (UniqueName: \"kubernetes.io/projected/ae3f417e-2bae-44dd-973f-5314b6f64972-kube-api-access-w9h96\") pod \"console-f9d7485db-2dsnx\" (UID: 
\"ae3f417e-2bae-44dd-973f-5314b6f64972\") " pod="openshift-console/console-f9d7485db-2dsnx" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.811535 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.811565 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c0670a6-888e-40e3-bf5d-82779e70dd1c-config\") pod \"controller-manager-879f6c89f-ffkjd\" (UID: \"3c0670a6-888e-40e3-bf5d-82779e70dd1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ffkjd" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.811595 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l4n2\" (UniqueName: \"kubernetes.io/projected/962524c6-7992-43d5-a7f3-5fdd04297f24-kube-api-access-4l4n2\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.811615 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a636bbb-70b8-4b2a-96c6-94f9edba40cc-config\") pod \"authentication-operator-69f744f599-jrfhj\" (UID: \"4a636bbb-70b8-4b2a-96c6-94f9edba40cc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jrfhj" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.811635 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c0670a6-888e-40e3-bf5d-82779e70dd1c-client-ca\") pod \"controller-manager-879f6c89f-ffkjd\" (UID: \"3c0670a6-888e-40e3-bf5d-82779e70dd1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ffkjd" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.811689 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/083f0d8a-e0c4-46ae-8993-8547dd260553-config\") pod \"machine-api-operator-5694c8668f-pnfph\" (UID: \"083f0d8a-e0c4-46ae-8993-8547dd260553\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pnfph" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.811708 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64fc7a32-5852-4e03-b1b7-1663f7f52b65-trusted-ca-bundle\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.811727 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc 
kubenswrapper[4721]: I0202 13:03:28.811747 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/202d08f0-f5ea-4414-b2e6-5a690148a823-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-pvz9q\" (UID: \"202d08f0-f5ea-4414-b2e6-5a690148a823\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvz9q" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.811784 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt5vj\" (UniqueName: \"kubernetes.io/projected/af02a63f-5e62-47ff-baf5-1dc1e95dc1ad-kube-api-access-lt5vj\") pod \"downloads-7954f5f757-zt9ng\" (UID: \"af02a63f-5e62-47ff-baf5-1dc1e95dc1ad\") " pod="openshift-console/downloads-7954f5f757-zt9ng" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.812132 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/64fc7a32-5852-4e03-b1b7-1663f7f52b65-etcd-serving-ca\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.812145 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/083f0d8a-e0c4-46ae-8993-8547dd260553-images\") pod \"machine-api-operator-5694c8668f-pnfph\" (UID: \"083f0d8a-e0c4-46ae-8993-8547dd260553\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pnfph" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.813232 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c0670a6-888e-40e3-bf5d-82779e70dd1c-config\") pod \"controller-manager-879f6c89f-ffkjd\" (UID: \"3c0670a6-888e-40e3-bf5d-82779e70dd1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ffkjd" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.813595 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/64fc7a32-5852-4e03-b1b7-1663f7f52b65-image-import-ca\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.813652 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/64fc7a32-5852-4e03-b1b7-1663f7f52b65-node-pullsecrets\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.815749 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f1e834f-23b5-42a5-9d13-b9e5720a597c-config\") pod \"route-controller-manager-6576b87f9c-bg49f\" (UID: \"8f1e834f-23b5-42a5-9d13-b9e5720a597c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.816382 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hlnsv"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.818789 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/083f0d8a-e0c4-46ae-8993-8547dd260553-config\") pod \"machine-api-operator-5694c8668f-pnfph\" (UID: \"083f0d8a-e0c4-46ae-8993-8547dd260553\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pnfph" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.808620 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zcf44" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.819117 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-r47km"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.819194 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f1e834f-23b5-42a5-9d13-b9e5720a597c-client-ca\") pod \"route-controller-manager-6576b87f9c-bg49f\" (UID: \"8f1e834f-23b5-42a5-9d13-b9e5720a597c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.817680 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/64fc7a32-5852-4e03-b1b7-1663f7f52b65-audit-dir\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.817662 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bz9nm" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.819483 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500620-rxhcg"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.819860 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-c68d5"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.820107 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64fc7a32-5852-4e03-b1b7-1663f7f52b65-config\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.820259 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c0670a6-888e-40e3-bf5d-82779e70dd1c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ffkjd\" (UID: \"3c0670a6-888e-40e3-bf5d-82779e70dd1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ffkjd" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.820337 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w9gmp"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.820468 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-r47km" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.820437 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hlnsv" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.818168 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f1e834f-23b5-42a5-9d13-b9e5720a597c-serving-cert\") pod \"route-controller-manager-6576b87f9c-bg49f\" (UID: \"8f1e834f-23b5-42a5-9d13-b9e5720a597c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.820545 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/64fc7a32-5852-4e03-b1b7-1663f7f52b65-audit\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.820411 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-c68d5" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.820911 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-rxhcg" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.821028 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/083f0d8a-e0c4-46ae-8993-8547dd260553-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-pnfph\" (UID: \"083f0d8a-e0c4-46ae-8993-8547dd260553\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pnfph" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.821495 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c0670a6-888e-40e3-bf5d-82779e70dd1c-serving-cert\") pod \"controller-manager-879f6c89f-ffkjd\" (UID: \"3c0670a6-888e-40e3-bf5d-82779e70dd1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ffkjd" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.821964 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-c958f"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.822496 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krxdl"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.822507 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w9gmp" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.822875 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-fgg7h"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.822934 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krxdl" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.823102 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c958f" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.823320 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-2dsnx"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.823704 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c0670a6-888e-40e3-bf5d-82779e70dd1c-client-ca\") pod \"controller-manager-879f6c89f-ffkjd\" (UID: \"3c0670a6-888e-40e3-bf5d-82779e70dd1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ffkjd" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.824506 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jrfhj"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.824619 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64fc7a32-5852-4e03-b1b7-1663f7f52b65-trusted-ca-bundle\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.825567 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2rjrk"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.827100 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-zt9ng"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.827807 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64fc7a32-5852-4e03-b1b7-1663f7f52b65-serving-cert\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.830639 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fqbhq"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.830809 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ffkjd"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.833828 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/64fc7a32-5852-4e03-b1b7-1663f7f52b65-etcd-client\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.833931 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-kcw66"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.833977 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l49ld"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.835079 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.835764 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/64fc7a32-5852-4e03-b1b7-1663f7f52b65-encryption-config\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.839327 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-f4l8v"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.839612 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.841455 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-z25fz"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.842875 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td8sr"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.844191 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2wzzt"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.846263 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-kq22p"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.847124 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvz9q"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.847265 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kq22p" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.848960 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wlhhk"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.851188 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-csktx"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.851804 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7gtg4"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.854315 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-8t7x8"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.854555 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-7gtg4" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.854781 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8t7x8" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.856681 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-kh9ph"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.857664 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-kh9ph" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.857857 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hlnsv"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.859721 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.866632 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2jzch"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.866672 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qr8r7"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.871858 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-f5gbr"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.874325 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w9gmp"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.874707 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zcf44"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.877639 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.881941 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkplf"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.881979 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-kh9ph"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.881992 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bz9nm"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.884043 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.884210 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6wdm7"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.892192 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-zg529"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.893161 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-78nss"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.898148 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kq22p"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.899326 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-r47km"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.899828 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.904617 4721 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jdzwk"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.906903 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-c68d5"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.910209 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krxdl"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.914276 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a636bbb-70b8-4b2a-96c6-94f9edba40cc-service-ca-bundle\") pod \"authentication-operator-69f744f599-jrfhj\" (UID: \"4a636bbb-70b8-4b2a-96c6-94f9edba40cc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jrfhj" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.914317 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75fc13c2-ccc5-46a0-8a65-d6bc5340baab-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2rjrk\" (UID: \"75fc13c2-ccc5-46a0-8a65-d6bc5340baab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2rjrk" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.914348 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f5kv\" (UniqueName: \"kubernetes.io/projected/a1dde568-291e-40bf-9df7-18cd5449d0aa-kube-api-access-8f5kv\") pod \"dns-operator-744455d44c-zg529\" (UID: \"a1dde568-291e-40bf-9df7-18cd5449d0aa\") " pod="openshift-dns-operator/dns-operator-744455d44c-zg529" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.914371 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/962524c6-7992-43d5-a7f3-5fdd04297f24-audit-policies\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.914392 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/70d74e61-4d44-4a6c-8a14-16e131d79e47-bound-sa-token\") pod \"ingress-operator-5b745b69d9-z25fz\" (UID: \"70d74e61-4d44-4a6c-8a14-16e131d79e47\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z25fz" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.914410 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.914430 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f028d7b1-e474-45f8-9c4e-d1b2322175c7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-td8sr\" (UID: \"f028d7b1-e474-45f8-9c4e-d1b2322175c7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td8sr" Feb 02 13:03:28 crc kubenswrapper[4721]: 
I0202 13:03:28.914449 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kth9\" (UniqueName: \"kubernetes.io/projected/4a636bbb-70b8-4b2a-96c6-94f9edba40cc-kube-api-access-9kth9\") pod \"authentication-operator-69f744f599-jrfhj\" (UID: \"4a636bbb-70b8-4b2a-96c6-94f9edba40cc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jrfhj" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.914469 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ae3f417e-2bae-44dd-973f-5314b6f64972-console-oauth-config\") pod \"console-f9d7485db-2dsnx\" (UID: \"ae3f417e-2bae-44dd-973f-5314b6f64972\") " pod="openshift-console/console-f9d7485db-2dsnx" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.914490 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ae3f417e-2bae-44dd-973f-5314b6f64972-console-config\") pod \"console-f9d7485db-2dsnx\" (UID: \"ae3f417e-2bae-44dd-973f-5314b6f64972\") " pod="openshift-console/console-f9d7485db-2dsnx" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.914520 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61358eab-20de-46bb-9701-dc736e6eb5ff-serving-cert\") pod \"console-operator-58897d9998-f4l8v\" (UID: \"61358eab-20de-46bb-9701-dc736e6eb5ff\") " pod="openshift-console-operator/console-operator-58897d9998-f4l8v" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.914537 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.914561 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.914587 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.914623 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f000c5-5173-44e3-89e6-446c345a6c05-config\") pod \"machine-approver-56656f9798-xbv2j\" (UID: \"67f000c5-5173-44e3-89e6-446c345a6c05\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbv2j" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.914648 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/3bbe9190-62bb-4079-afa7-adc9e970eae6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-l49ld\" (UID: \"3bbe9190-62bb-4079-afa7-adc9e970eae6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l49ld" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.914676 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68hn9\" (UniqueName: \"kubernetes.io/projected/61358eab-20de-46bb-9701-dc736e6eb5ff-kube-api-access-68hn9\") pod \"console-operator-58897d9998-f4l8v\" (UID: \"61358eab-20de-46bb-9701-dc736e6eb5ff\") " pod="openshift-console-operator/console-operator-58897d9998-f4l8v" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.914696 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae3f417e-2bae-44dd-973f-5314b6f64972-console-serving-cert\") pod \"console-f9d7485db-2dsnx\" (UID: \"ae3f417e-2bae-44dd-973f-5314b6f64972\") " pod="openshift-console/console-f9d7485db-2dsnx" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.914715 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw876\" (UniqueName: \"kubernetes.io/projected/75fc13c2-ccc5-46a0-8a65-d6bc5340baab-kube-api-access-zw876\") pod \"openshift-apiserver-operator-796bbdcf4f-2rjrk\" (UID: \"75fc13c2-ccc5-46a0-8a65-d6bc5340baab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2rjrk" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.914740 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zht5s\" (UniqueName: \"kubernetes.io/projected/67f000c5-5173-44e3-89e6-446c345a6c05-kube-api-access-zht5s\") pod \"machine-approver-56656f9798-xbv2j\" (UID: \"67f000c5-5173-44e3-89e6-446c345a6c05\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbv2j" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.914768 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61358eab-20de-46bb-9701-dc736e6eb5ff-config\") pod \"console-operator-58897d9998-f4l8v\" (UID: \"61358eab-20de-46bb-9701-dc736e6eb5ff\") " pod="openshift-console-operator/console-operator-58897d9998-f4l8v" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.914786 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.914809 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae3f417e-2bae-44dd-973f-5314b6f64972-trusted-ca-bundle\") pod \"console-f9d7485db-2dsnx\" (UID: \"ae3f417e-2bae-44dd-973f-5314b6f64972\") " pod="openshift-console/console-f9d7485db-2dsnx" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.914836 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: 
\"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.914874 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67f000c5-5173-44e3-89e6-446c345a6c05-auth-proxy-config\") pod \"machine-approver-56656f9798-xbv2j\" (UID: \"67f000c5-5173-44e3-89e6-446c345a6c05\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbv2j" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.914903 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a636bbb-70b8-4b2a-96c6-94f9edba40cc-serving-cert\") pod \"authentication-operator-69f744f599-jrfhj\" (UID: \"4a636bbb-70b8-4b2a-96c6-94f9edba40cc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jrfhj" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.914925 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75fc13c2-ccc5-46a0-8a65-d6bc5340baab-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2rjrk\" (UID: \"75fc13c2-ccc5-46a0-8a65-d6bc5340baab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2rjrk" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.914944 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a1dde568-291e-40bf-9df7-18cd5449d0aa-metrics-tls\") pod \"dns-operator-744455d44c-zg529\" (UID: \"a1dde568-291e-40bf-9df7-18cd5449d0aa\") " pod="openshift-dns-operator/dns-operator-744455d44c-zg529" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.914965 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bwbg\" (UniqueName: \"kubernetes.io/projected/f028d7b1-e474-45f8-9c4e-d1b2322175c7-kube-api-access-9bwbg\") pod \"openshift-controller-manager-operator-756b6f6bc6-td8sr\" (UID: \"f028d7b1-e474-45f8-9c4e-d1b2322175c7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td8sr" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.914985 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a636bbb-70b8-4b2a-96c6-94f9edba40cc-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jrfhj\" (UID: \"4a636bbb-70b8-4b2a-96c6-94f9edba40cc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jrfhj" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.915005 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxdv2\" (UniqueName: \"kubernetes.io/projected/3bbe9190-62bb-4079-afa7-adc9e970eae6-kube-api-access-vxdv2\") pod \"cluster-samples-operator-665b6dd947-l49ld\" (UID: \"3bbe9190-62bb-4079-afa7-adc9e970eae6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l49ld" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.915030 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9h96\" (UniqueName: \"kubernetes.io/projected/ae3f417e-2bae-44dd-973f-5314b6f64972-kube-api-access-w9h96\") pod \"console-f9d7485db-2dsnx\" (UID: \"ae3f417e-2bae-44dd-973f-5314b6f64972\") " 
pod="openshift-console/console-f9d7485db-2dsnx" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.915049 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.915084 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/70d74e61-4d44-4a6c-8a14-16e131d79e47-trusted-ca\") pod \"ingress-operator-5b745b69d9-z25fz\" (UID: \"70d74e61-4d44-4a6c-8a14-16e131d79e47\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z25fz" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.915106 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l4n2\" (UniqueName: \"kubernetes.io/projected/962524c6-7992-43d5-a7f3-5fdd04297f24-kube-api-access-4l4n2\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.915123 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a636bbb-70b8-4b2a-96c6-94f9edba40cc-config\") pod \"authentication-operator-69f744f599-jrfhj\" (UID: \"4a636bbb-70b8-4b2a-96c6-94f9edba40cc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jrfhj" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.915150 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/202d08f0-f5ea-4414-b2e6-5a690148a823-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-pvz9q\" (UID: \"202d08f0-f5ea-4414-b2e6-5a690148a823\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvz9q" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.915168 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt5vj\" (UniqueName: \"kubernetes.io/projected/af02a63f-5e62-47ff-baf5-1dc1e95dc1ad-kube-api-access-lt5vj\") pod \"downloads-7954f5f757-zt9ng\" (UID: \"af02a63f-5e62-47ff-baf5-1dc1e95dc1ad\") " pod="openshift-console/downloads-7954f5f757-zt9ng" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.915186 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.915207 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.915238 4721 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.915257 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk95h\" (UniqueName: \"kubernetes.io/projected/202d08f0-f5ea-4414-b2e6-5a690148a823-kube-api-access-zk95h\") pod \"cluster-image-registry-operator-dc59b4c8b-pvz9q\" (UID: \"202d08f0-f5ea-4414-b2e6-5a690148a823\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvz9q" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.915274 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ae3f417e-2bae-44dd-973f-5314b6f64972-service-ca\") pod \"console-f9d7485db-2dsnx\" (UID: \"ae3f417e-2bae-44dd-973f-5314b6f64972\") " pod="openshift-console/console-f9d7485db-2dsnx" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.915295 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.915314 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/202d08f0-f5ea-4414-b2e6-5a690148a823-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-pvz9q\" (UID: \"202d08f0-f5ea-4414-b2e6-5a690148a823\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvz9q" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.915329 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f028d7b1-e474-45f8-9c4e-d1b2322175c7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-td8sr\" (UID: \"f028d7b1-e474-45f8-9c4e-d1b2322175c7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td8sr" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.915355 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ae3f417e-2bae-44dd-973f-5314b6f64972-oauth-serving-cert\") pod \"console-f9d7485db-2dsnx\" (UID: \"ae3f417e-2bae-44dd-973f-5314b6f64972\") " pod="openshift-console/console-f9d7485db-2dsnx" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.915371 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/202d08f0-f5ea-4414-b2e6-5a690148a823-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-pvz9q\" (UID: \"202d08f0-f5ea-4414-b2e6-5a690148a823\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvz9q" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.915395 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-j7qwv\" (UniqueName: \"kubernetes.io/projected/70d74e61-4d44-4a6c-8a14-16e131d79e47-kube-api-access-j7qwv\") pod \"ingress-operator-5b745b69d9-z25fz\" (UID: \"70d74e61-4d44-4a6c-8a14-16e131d79e47\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z25fz" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.915410 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61358eab-20de-46bb-9701-dc736e6eb5ff-trusted-ca\") pod \"console-operator-58897d9998-f4l8v\" (UID: \"61358eab-20de-46bb-9701-dc736e6eb5ff\") " pod="openshift-console-operator/console-operator-58897d9998-f4l8v" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.915427 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/962524c6-7992-43d5-a7f3-5fdd04297f24-audit-dir\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.915451 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/70d74e61-4d44-4a6c-8a14-16e131d79e47-metrics-tls\") pod \"ingress-operator-5b745b69d9-z25fz\" (UID: \"70d74e61-4d44-4a6c-8a14-16e131d79e47\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z25fz" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.915467 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/67f000c5-5173-44e3-89e6-446c345a6c05-machine-approver-tls\") pod \"machine-approver-56656f9798-xbv2j\" (UID: \"67f000c5-5173-44e3-89e6-446c345a6c05\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbv2j" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.916338 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a636bbb-70b8-4b2a-96c6-94f9edba40cc-service-ca-bundle\") pod \"authentication-operator-69f744f599-jrfhj\" (UID: \"4a636bbb-70b8-4b2a-96c6-94f9edba40cc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jrfhj" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.916911 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75fc13c2-ccc5-46a0-8a65-d6bc5340baab-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2rjrk\" (UID: \"75fc13c2-ccc5-46a0-8a65-d6bc5340baab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2rjrk" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.917831 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/962524c6-7992-43d5-a7f3-5fdd04297f24-audit-policies\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.918936 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-c958f"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.919184 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["hostpath-provisioner/csi-hostpathplugin-7gtg4"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.919880 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a636bbb-70b8-4b2a-96c6-94f9edba40cc-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jrfhj\" (UID: \"4a636bbb-70b8-4b2a-96c6-94f9edba40cc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jrfhj" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.920456 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500620-rxhcg"] Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.921609 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/202d08f0-f5ea-4414-b2e6-5a690148a823-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-pvz9q\" (UID: \"202d08f0-f5ea-4414-b2e6-5a690148a823\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvz9q" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.922145 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a636bbb-70b8-4b2a-96c6-94f9edba40cc-config\") pod \"authentication-operator-69f744f599-jrfhj\" (UID: \"4a636bbb-70b8-4b2a-96c6-94f9edba40cc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jrfhj" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.924192 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.925359 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ae3f417e-2bae-44dd-973f-5314b6f64972-service-ca\") pod \"console-f9d7485db-2dsnx\" (UID: \"ae3f417e-2bae-44dd-973f-5314b6f64972\") " pod="openshift-console/console-f9d7485db-2dsnx" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.926427 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.928440 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.928216 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61358eab-20de-46bb-9701-dc736e6eb5ff-config\") pod \"console-operator-58897d9998-f4l8v\" (UID: \"61358eab-20de-46bb-9701-dc736e6eb5ff\") " pod="openshift-console-operator/console-operator-58897d9998-f4l8v" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.928770 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ae3f417e-2bae-44dd-973f-5314b6f64972-console-serving-cert\") pod \"console-f9d7485db-2dsnx\" (UID: \"ae3f417e-2bae-44dd-973f-5314b6f64972\") " pod="openshift-console/console-f9d7485db-2dsnx" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.929320 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.929922 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/962524c6-7992-43d5-a7f3-5fdd04297f24-audit-dir\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.929926 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ae3f417e-2bae-44dd-973f-5314b6f64972-oauth-serving-cert\") pod \"console-f9d7485db-2dsnx\" (UID: \"ae3f417e-2bae-44dd-973f-5314b6f64972\") " pod="openshift-console/console-f9d7485db-2dsnx" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.930189 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67f000c5-5173-44e3-89e6-446c345a6c05-auth-proxy-config\") pod \"machine-approver-56656f9798-xbv2j\" (UID: \"67f000c5-5173-44e3-89e6-446c345a6c05\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbv2j" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.930438 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.930447 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f000c5-5173-44e3-89e6-446c345a6c05-config\") pod \"machine-approver-56656f9798-xbv2j\" (UID: \"67f000c5-5173-44e3-89e6-446c345a6c05\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbv2j" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.930456 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.930966 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61358eab-20de-46bb-9701-dc736e6eb5ff-trusted-ca\") pod \"console-operator-58897d9998-f4l8v\" (UID: \"61358eab-20de-46bb-9701-dc736e6eb5ff\") " pod="openshift-console-operator/console-operator-58897d9998-f4l8v" Feb 02 13:03:28 crc kubenswrapper[4721]: 
I0202 13:03:28.932191 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.932474 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ae3f417e-2bae-44dd-973f-5314b6f64972-console-config\") pod \"console-f9d7485db-2dsnx\" (UID: \"ae3f417e-2bae-44dd-973f-5314b6f64972\") " pod="openshift-console/console-f9d7485db-2dsnx" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.932480 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f028d7b1-e474-45f8-9c4e-d1b2322175c7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-td8sr\" (UID: \"f028d7b1-e474-45f8-9c4e-d1b2322175c7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td8sr" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.932626 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.933316 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.933622 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae3f417e-2bae-44dd-973f-5314b6f64972-trusted-ca-bundle\") pod \"console-f9d7485db-2dsnx\" (UID: \"ae3f417e-2bae-44dd-973f-5314b6f64972\") " pod="openshift-console/console-f9d7485db-2dsnx" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.934455 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a636bbb-70b8-4b2a-96c6-94f9edba40cc-serving-cert\") pod \"authentication-operator-69f744f599-jrfhj\" (UID: \"4a636bbb-70b8-4b2a-96c6-94f9edba40cc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jrfhj" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.934498 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f028d7b1-e474-45f8-9c4e-d1b2322175c7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-td8sr\" (UID: \"f028d7b1-e474-45f8-9c4e-d1b2322175c7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td8sr" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.934583 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.934752 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3bbe9190-62bb-4079-afa7-adc9e970eae6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-l49ld\" (UID: \"3bbe9190-62bb-4079-afa7-adc9e970eae6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l49ld" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.934907 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.934996 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a1dde568-291e-40bf-9df7-18cd5449d0aa-metrics-tls\") pod \"dns-operator-744455d44c-zg529\" (UID: \"a1dde568-291e-40bf-9df7-18cd5449d0aa\") " pod="openshift-dns-operator/dns-operator-744455d44c-zg529" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.936014 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/67f000c5-5173-44e3-89e6-446c345a6c05-machine-approver-tls\") pod \"machine-approver-56656f9798-xbv2j\" (UID: \"67f000c5-5173-44e3-89e6-446c345a6c05\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbv2j" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.936965 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.937349 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75fc13c2-ccc5-46a0-8a65-d6bc5340baab-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2rjrk\" (UID: \"75fc13c2-ccc5-46a0-8a65-d6bc5340baab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2rjrk" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.937869 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/202d08f0-f5ea-4414-b2e6-5a690148a823-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-pvz9q\" (UID: \"202d08f0-f5ea-4414-b2e6-5a690148a823\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvz9q" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.938672 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ae3f417e-2bae-44dd-973f-5314b6f64972-console-oauth-config\") pod \"console-f9d7485db-2dsnx\" (UID: 
\"ae3f417e-2bae-44dd-973f-5314b6f64972\") " pod="openshift-console/console-f9d7485db-2dsnx" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.938772 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61358eab-20de-46bb-9701-dc736e6eb5ff-serving-cert\") pod \"console-operator-58897d9998-f4l8v\" (UID: \"61358eab-20de-46bb-9701-dc736e6eb5ff\") " pod="openshift-console-operator/console-operator-58897d9998-f4l8v" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.940558 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.960181 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.980273 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 02 13:03:28 crc kubenswrapper[4721]: I0202 13:03:28.999709 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.020370 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.039791 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.060856 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.079597 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.084671 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/70d74e61-4d44-4a6c-8a14-16e131d79e47-metrics-tls\") pod \"ingress-operator-5b745b69d9-z25fz\" (UID: \"70d74e61-4d44-4a6c-8a14-16e131d79e47\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z25fz" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.107449 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.113660 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/70d74e61-4d44-4a6c-8a14-16e131d79e47-trusted-ca\") pod \"ingress-operator-5b745b69d9-z25fz\" (UID: \"70d74e61-4d44-4a6c-8a14-16e131d79e47\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z25fz" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.120325 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.140043 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.161012 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.181574 4721 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.199397 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.221149 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.240523 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.261646 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.320629 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.340731 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.360563 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.381332 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.400252 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.419769 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.440796 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.460580 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.481407 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.501144 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.519901 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.540736 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.560009 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.580441 4721 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.599684 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.619737 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.641939 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.660977 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.681627 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.701209 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.720627 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.741230 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.760930 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.782359 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.801029 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.818879 4721 request.go:700] Waited for 1.006878782s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dservice-ca-operator-dockercfg-rg9jl&limit=500&resourceVersion=0 Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.822417 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.841201 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.860919 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.882398 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.935535 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-w2p87\" (UniqueName: \"kubernetes.io/projected/8f1e834f-23b5-42a5-9d13-b9e5720a597c-kube-api-access-w2p87\") pod \"route-controller-manager-6576b87f9c-bg49f\" (UID: \"8f1e834f-23b5-42a5-9d13-b9e5720a597c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.946401 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6sbb\" (UniqueName: \"kubernetes.io/projected/64fc7a32-5852-4e03-b1b7-1663f7f52b65-kube-api-access-q6sbb\") pod \"apiserver-76f77b778f-kcw66\" (UID: \"64fc7a32-5852-4e03-b1b7-1663f7f52b65\") " pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.965124 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nff5\" (UniqueName: \"kubernetes.io/projected/3c0670a6-888e-40e3-bf5d-82779e70dd1c-kube-api-access-4nff5\") pod \"controller-manager-879f6c89f-ffkjd\" (UID: \"3c0670a6-888e-40e3-bf5d-82779e70dd1c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ffkjd" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.980747 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw6dh\" (UniqueName: \"kubernetes.io/projected/083f0d8a-e0c4-46ae-8993-8547dd260553-kube-api-access-jw6dh\") pod \"machine-api-operator-5694c8668f-pnfph\" (UID: \"083f0d8a-e0c4-46ae-8993-8547dd260553\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pnfph" Feb 02 13:03:29 crc kubenswrapper[4721]: I0202 13:03:29.995209 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.001014 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.020100 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.040925 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.060958 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.080195 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.100678 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.120922 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.135372 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.141453 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.147883 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-pnfph" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.162032 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.174674 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-kcw66" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.181514 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.198566 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ffkjd" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.201998 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.221823 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.241508 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.261280 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.282407 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.300988 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.321287 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.341435 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.361113 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.381026 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.384955 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-pnfph"] Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.395554 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f"] Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.400636 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.420166 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.440658 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.447755 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-kcw66"] Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.450268 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f" event={"ID":"8f1e834f-23b5-42a5-9d13-b9e5720a597c","Type":"ContainerStarted","Data":"9dac14241b7592e3b43fe2d27aa1874f518d588eab3c2210074f031e8ca8e1b4"} Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.452834 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pnfph" event={"ID":"083f0d8a-e0c4-46ae-8993-8547dd260553","Type":"ContainerStarted","Data":"34ef692467272d3aee99c0d5eab9b2ce3532c1d3b07684dcfeed9326b63792ca"} Feb 02 13:03:30 crc kubenswrapper[4721]: W0202 13:03:30.454190 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64fc7a32_5852_4e03_b1b7_1663f7f52b65.slice/crio-6433b629d1bde4bb980befd60ac6efea557893fc4cea82b8bd13eb528ae2b9a2 WatchSource:0}: Error finding container 6433b629d1bde4bb980befd60ac6efea557893fc4cea82b8bd13eb528ae2b9a2: Status 404 returned error can't find the container with id 6433b629d1bde4bb980befd60ac6efea557893fc4cea82b8bd13eb528ae2b9a2 Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.459356 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.465863 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ffkjd"] Feb 02 13:03:30 crc kubenswrapper[4721]: W0202 13:03:30.474436 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c0670a6_888e_40e3_bf5d_82779e70dd1c.slice/crio-167d1cdbeb24f93927ece3e3fa3df789c23a8308344e8f29012657e06e53e904 WatchSource:0}: Error finding container 167d1cdbeb24f93927ece3e3fa3df789c23a8308344e8f29012657e06e53e904: Status 404 returned error can't find the container with id 167d1cdbeb24f93927ece3e3fa3df789c23a8308344e8f29012657e06e53e904 Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.480931 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.501268 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.520352 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" 
Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.539870 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.559934 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.580639 4721 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.601485 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.620770 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.640197 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.660546 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.680885 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.700726 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.720272 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.757685 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f5kv\" (UniqueName: \"kubernetes.io/projected/a1dde568-291e-40bf-9df7-18cd5449d0aa-kube-api-access-8f5kv\") pod \"dns-operator-744455d44c-zg529\" (UID: \"a1dde568-291e-40bf-9df7-18cd5449d0aa\") " pod="openshift-dns-operator/dns-operator-744455d44c-zg529"
Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.776092 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/70d74e61-4d44-4a6c-8a14-16e131d79e47-bound-sa-token\") pod \"ingress-operator-5b745b69d9-z25fz\" (UID: \"70d74e61-4d44-4a6c-8a14-16e131d79e47\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z25fz"
Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.795359 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk95h\" (UniqueName: \"kubernetes.io/projected/202d08f0-f5ea-4414-b2e6-5a690148a823-kube-api-access-zk95h\") pod \"cluster-image-registry-operator-dc59b4c8b-pvz9q\" (UID: \"202d08f0-f5ea-4414-b2e6-5a690148a823\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvz9q"
Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.814195 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxdv2\" (UniqueName: \"kubernetes.io/projected/3bbe9190-62bb-4079-afa7-adc9e970eae6-kube-api-access-vxdv2\") pod \"cluster-samples-operator-665b6dd947-l49ld\" (UID: \"3bbe9190-62bb-4079-afa7-adc9e970eae6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l49ld"
Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.819089 4721 request.go:700] Waited for 1.899749954s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/console/token
Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.834581 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9h96\" (UniqueName: \"kubernetes.io/projected/ae3f417e-2bae-44dd-973f-5314b6f64972-kube-api-access-w9h96\") pod \"console-f9d7485db-2dsnx\" (UID: \"ae3f417e-2bae-44dd-973f-5314b6f64972\") " pod="openshift-console/console-f9d7485db-2dsnx"
Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.852589 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l4n2\" (UniqueName: \"kubernetes.io/projected/962524c6-7992-43d5-a7f3-5fdd04297f24-kube-api-access-4l4n2\") pod \"oauth-openshift-558db77b4-fqbhq\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq"
Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.872591 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/202d08f0-f5ea-4414-b2e6-5a690148a823-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-pvz9q\" (UID: \"202d08f0-f5ea-4414-b2e6-5a690148a823\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvz9q"
Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.893101 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt5vj\" (UniqueName: \"kubernetes.io/projected/af02a63f-5e62-47ff-baf5-1dc1e95dc1ad-kube-api-access-lt5vj\") pod \"downloads-7954f5f757-zt9ng\" (UID: \"af02a63f-5e62-47ff-baf5-1dc1e95dc1ad\") " pod="openshift-console/downloads-7954f5f757-zt9ng"
Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.913124 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq"
Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.917817 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw876\" (UniqueName: \"kubernetes.io/projected/75fc13c2-ccc5-46a0-8a65-d6bc5340baab-kube-api-access-zw876\") pod \"openshift-apiserver-operator-796bbdcf4f-2rjrk\" (UID: \"75fc13c2-ccc5-46a0-8a65-d6bc5340baab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2rjrk"
Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.938226 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zht5s\" (UniqueName: \"kubernetes.io/projected/67f000c5-5173-44e3-89e6-446c345a6c05-kube-api-access-zht5s\") pod \"machine-approver-56656f9798-xbv2j\" (UID: \"67f000c5-5173-44e3-89e6-446c345a6c05\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbv2j"
Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.957408 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7qwv\" (UniqueName: \"kubernetes.io/projected/70d74e61-4d44-4a6c-8a14-16e131d79e47-kube-api-access-j7qwv\") pod \"ingress-operator-5b745b69d9-z25fz\" (UID: \"70d74e61-4d44-4a6c-8a14-16e131d79e47\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z25fz"
Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.960624 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l49ld"
Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.978284 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bwbg\" (UniqueName: \"kubernetes.io/projected/f028d7b1-e474-45f8-9c4e-d1b2322175c7-kube-api-access-9bwbg\") pod \"openshift-controller-manager-operator-756b6f6bc6-td8sr\" (UID: \"f028d7b1-e474-45f8-9c4e-d1b2322175c7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td8sr"
Feb 02 13:03:30 crc kubenswrapper[4721]: I0202 13:03:30.984600 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbv2j"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:30.996813 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvz9q"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.004349 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-zt9ng"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.008955 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kth9\" (UniqueName: \"kubernetes.io/projected/4a636bbb-70b8-4b2a-96c6-94f9edba40cc-kube-api-access-9kth9\") pod \"authentication-operator-69f744f599-jrfhj\" (UID: \"4a636bbb-70b8-4b2a-96c6-94f9edba40cc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jrfhj"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.016922 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68hn9\" (UniqueName: \"kubernetes.io/projected/61358eab-20de-46bb-9701-dc736e6eb5ff-kube-api-access-68hn9\") pod \"console-operator-58897d9998-f4l8v\" (UID: \"61358eab-20de-46bb-9701-dc736e6eb5ff\") " pod="openshift-console-operator/console-operator-58897d9998-f4l8v"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.026308 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-zg529"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.040155 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjxt2\" (UniqueName: \"kubernetes.io/projected/b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb-kube-api-access-tjxt2\") pod \"router-default-5444994796-7vhgv\" (UID: \"b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb\") " pod="openshift-ingress/router-default-5444994796-7vhgv"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.040234 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4a107c9e-3f0e-4d4b-8b02-65bda6793a2d-etcd-client\") pod \"etcd-operator-b45778765-2jzch\" (UID: \"4a107c9e-3f0e-4d4b-8b02-65bda6793a2d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jzch"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.040266 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab4249b9-1751-45d6-be3f-58668c4542bd-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-p9lvq\" (UID: \"ab4249b9-1751-45d6-be3f-58668c4542bd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.040337 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2a74f12-1bed-4744-9dec-57282d5301eb-serving-cert\") pod \"openshift-config-operator-7777fb866f-fgg7h\" (UID: \"f2a74f12-1bed-4744-9dec-57282d5301eb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fgg7h"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.040359 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab4249b9-1751-45d6-be3f-58668c4542bd-audit-dir\") pod \"apiserver-7bbb656c7d-p9lvq\" (UID: \"ab4249b9-1751-45d6-be3f-58668c4542bd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.040408 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb-default-certificate\") pod \"router-default-5444994796-7vhgv\" (UID: \"b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb\") " pod="openshift-ingress/router-default-5444994796-7vhgv"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.040434 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb-service-ca-bundle\") pod \"router-default-5444994796-7vhgv\" (UID: \"b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb\") " pod="openshift-ingress/router-default-5444994796-7vhgv"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.040483 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb-stats-auth\") pod \"router-default-5444994796-7vhgv\" (UID: \"b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb\") " pod="openshift-ingress/router-default-5444994796-7vhgv"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.040560 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ab4249b9-1751-45d6-be3f-58668c4542bd-audit-policies\") pod \"apiserver-7bbb656c7d-p9lvq\" (UID: \"ab4249b9-1751-45d6-be3f-58668c4542bd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.040608 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a107c9e-3f0e-4d4b-8b02-65bda6793a2d-serving-cert\") pod \"etcd-operator-b45778765-2jzch\" (UID: \"4a107c9e-3f0e-4d4b-8b02-65bda6793a2d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jzch"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.040662 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e5f7f80a-15ef-47b9-9e1e-325066df7897-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.040685 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab4249b9-1751-45d6-be3f-58668c4542bd-serving-cert\") pod \"apiserver-7bbb656c7d-p9lvq\" (UID: \"ab4249b9-1751-45d6-be3f-58668c4542bd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.040734 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccp75\" (UniqueName: \"kubernetes.io/projected/ab4249b9-1751-45d6-be3f-58668c4542bd-kube-api-access-ccp75\") pod \"apiserver-7bbb656c7d-p9lvq\" (UID: \"ab4249b9-1751-45d6-be3f-58668c4542bd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.040803 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e5f7f80a-15ef-47b9-9e1e-325066df7897-registry-tls\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.040833 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f2a74f12-1bed-4744-9dec-57282d5301eb-available-featuregates\") pod \"openshift-config-operator-7777fb866f-fgg7h\" (UID: \"f2a74f12-1bed-4744-9dec-57282d5301eb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fgg7h"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.040857 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4a107c9e-3f0e-4d4b-8b02-65bda6793a2d-etcd-ca\") pod \"etcd-operator-b45778765-2jzch\" (UID: \"4a107c9e-3f0e-4d4b-8b02-65bda6793a2d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jzch"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.040919 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.040945 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e5f7f80a-15ef-47b9-9e1e-325066df7897-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.040970 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a107c9e-3f0e-4d4b-8b02-65bda6793a2d-config\") pod \"etcd-operator-b45778765-2jzch\" (UID: \"4a107c9e-3f0e-4d4b-8b02-65bda6793a2d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jzch"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.040995 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e5f7f80a-15ef-47b9-9e1e-325066df7897-registry-certificates\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.041017 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4a107c9e-3f0e-4d4b-8b02-65bda6793a2d-etcd-service-ca\") pod \"etcd-operator-b45778765-2jzch\" (UID: \"4a107c9e-3f0e-4d4b-8b02-65bda6793a2d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jzch"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.041048 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ab4249b9-1751-45d6-be3f-58668c4542bd-encryption-config\") pod \"apiserver-7bbb656c7d-p9lvq\" (UID: \"ab4249b9-1751-45d6-be3f-58668c4542bd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.041153 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e5f7f80a-15ef-47b9-9e1e-325066df7897-trusted-ca\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.041214 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlpq2\" (UniqueName: \"kubernetes.io/projected/4a107c9e-3f0e-4d4b-8b02-65bda6793a2d-kube-api-access-rlpq2\") pod \"etcd-operator-b45778765-2jzch\" (UID: \"4a107c9e-3f0e-4d4b-8b02-65bda6793a2d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jzch"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.041238 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ab4249b9-1751-45d6-be3f-58668c4542bd-etcd-client\") pod \"apiserver-7bbb656c7d-p9lvq\" (UID: \"ab4249b9-1751-45d6-be3f-58668c4542bd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.041261 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e5f7f80a-15ef-47b9-9e1e-325066df7897-bound-sa-token\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.041284 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwm4n\" (UniqueName: \"kubernetes.io/projected/e5f7f80a-15ef-47b9-9e1e-325066df7897-kube-api-access-rwm4n\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.041312 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ab4249b9-1751-45d6-be3f-58668c4542bd-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-p9lvq\" (UID: \"ab4249b9-1751-45d6-be3f-58668c4542bd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.041346 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb-metrics-certs\") pod \"router-default-5444994796-7vhgv\" (UID: \"b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb\") " pod="openshift-ingress/router-default-5444994796-7vhgv"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.041381 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds87p\" (UniqueName: \"kubernetes.io/projected/f2a74f12-1bed-4744-9dec-57282d5301eb-kube-api-access-ds87p\") pod \"openshift-config-operator-7777fb866f-fgg7h\" (UID: \"f2a74f12-1bed-4744-9dec-57282d5301eb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fgg7h"
Feb 02 13:03:31 crc kubenswrapper[4721]: E0202 13:03:31.044217 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:31.544202177 +0000 UTC m=+151.846716566 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.059228 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td8sr"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.082125 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z25fz"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.112183 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-2dsnx"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.126548 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jrfhj"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.126729 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fqbhq"]
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.131659 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2rjrk"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.142177 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.142474 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/46f85b66-5a30-4bef-909c-26750b18e72d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jdzwk\" (UID: \"46f85b66-5a30-4bef-909c-26750b18e72d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jdzwk"
Feb 02 13:03:31 crc kubenswrapper[4721]: E0202 13:03:31.142545 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:31.642523692 +0000 UTC m=+151.945038161 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.142577 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6dc5b220-3e84-4a0a-9f7a-f27a007436f6-mountpoint-dir\") pod \"csi-hostpathplugin-7gtg4\" (UID: \"6dc5b220-3e84-4a0a-9f7a-f27a007436f6\") " pod="hostpath-provisioner/csi-hostpathplugin-7gtg4"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.142628 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/911d6570-6a65-42b8-a562-3e1ccdc8d562-auth-proxy-config\") pod \"machine-config-operator-74547568cd-c958f\" (UID: \"911d6570-6a65-42b8-a562-3e1ccdc8d562\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c958f"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.142648 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6dc5b220-3e84-4a0a-9f7a-f27a007436f6-socket-dir\") pod \"csi-hostpathplugin-7gtg4\" (UID: \"6dc5b220-3e84-4a0a-9f7a-f27a007436f6\") " pod="hostpath-provisioner/csi-hostpathplugin-7gtg4"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.142674 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/63b0f1ec-c2d9-4005-ba10-839949dbbcac-node-bootstrap-token\") pod \"machine-config-server-8t7x8\" (UID: \"63b0f1ec-c2d9-4005-ba10-839949dbbcac\") " pod="openshift-machine-config-operator/machine-config-server-8t7x8"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.142695 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w84l4\" (UniqueName: \"kubernetes.io/projected/ab926544-a708-445a-aaf9-0e3ad4593676-kube-api-access-w84l4\") pod \"service-ca-operator-777779d784-csktx\" (UID: \"ab926544-a708-445a-aaf9-0e3ad4593676\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-csktx"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.142753 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ab4249b9-1751-45d6-be3f-58668c4542bd-encryption-config\") pod \"apiserver-7bbb656c7d-p9lvq\" (UID: \"ab4249b9-1751-45d6-be3f-58668c4542bd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.142788 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e5f7f80a-15ef-47b9-9e1e-325066df7897-trusted-ca\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.142805 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ab4249b9-1751-45d6-be3f-58668c4542bd-etcd-client\") pod \"apiserver-7bbb656c7d-p9lvq\" (UID: \"ab4249b9-1751-45d6-be3f-58668c4542bd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.142823 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4sll\" (UniqueName: \"kubernetes.io/projected/63b0f1ec-c2d9-4005-ba10-839949dbbcac-kube-api-access-n4sll\") pod \"machine-config-server-8t7x8\" (UID: \"63b0f1ec-c2d9-4005-ba10-839949dbbcac\") " pod="openshift-machine-config-operator/machine-config-server-8t7x8"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.142840 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7grx\" (UniqueName: \"kubernetes.io/projected/6104d27e-fefa-4e2a-9b9e-62013c96f664-kube-api-access-m7grx\") pod \"multus-admission-controller-857f4d67dd-c68d5\" (UID: \"6104d27e-fefa-4e2a-9b9e-62013c96f664\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c68d5"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.142869 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e5f7f80a-15ef-47b9-9e1e-325066df7897-bound-sa-token\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.142897 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2de0eb97-f51f-4468-9a68-eb9d6a7ce40d-proxy-tls\") pod \"machine-config-controller-84d6567774-f5gbr\" (UID: \"2de0eb97-f51f-4468-9a68-eb9d6a7ce40d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f5gbr"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.142926 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/625494cc-c7b0-4a0a-811c-d4822b1c0acc-config-volume\") pod \"dns-default-kh9ph\" (UID: \"625494cc-c7b0-4a0a-811c-d4822b1c0acc\") " pod="openshift-dns/dns-default-kh9ph"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.143719 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a98572f-0fab-4dab-9935-6bf52cdc7fff-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6wdm7\" (UID: \"0a98572f-0fab-4dab-9935-6bf52cdc7fff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6wdm7"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.144296 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab926544-a708-445a-aaf9-0e3ad4593676-serving-cert\") pod \"service-ca-operator-777779d784-csktx\" (UID: \"ab926544-a708-445a-aaf9-0e3ad4593676\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-csktx"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.144327 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ab4249b9-1751-45d6-be3f-58668c4542bd-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-p9lvq\" (UID: \"ab4249b9-1751-45d6-be3f-58668c4542bd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.144354 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0a1d56e-00d0-4e88-bdfb-461578e327e6-config\") pod \"kube-controller-manager-operator-78b949d7b-w9gmp\" (UID: \"a0a1d56e-00d0-4e88-bdfb-461578e327e6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w9gmp"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.144393 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/53727b9f-9f5f-4f6e-8fa2-a6018c8225f5-cert\") pod \"ingress-canary-kq22p\" (UID: \"53727b9f-9f5f-4f6e-8fa2-a6018c8225f5\") " pod="openshift-ingress-canary/ingress-canary-kq22p"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.144417 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6104d27e-fefa-4e2a-9b9e-62013c96f664-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-c68d5\" (UID: \"6104d27e-fefa-4e2a-9b9e-62013c96f664\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c68d5"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.144444 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds87p\" (UniqueName: \"kubernetes.io/projected/f2a74f12-1bed-4744-9dec-57282d5301eb-kube-api-access-ds87p\") pod \"openshift-config-operator-7777fb866f-fgg7h\" (UID: \"f2a74f12-1bed-4744-9dec-57282d5301eb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fgg7h"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.144471 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6dc5b220-3e84-4a0a-9f7a-f27a007436f6-plugins-dir\") pod \"csi-hostpathplugin-7gtg4\" (UID: \"6dc5b220-3e84-4a0a-9f7a-f27a007436f6\") " pod="hostpath-provisioner/csi-hostpathplugin-7gtg4"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.144529 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6dc5b220-3e84-4a0a-9f7a-f27a007436f6-registration-dir\") pod \"csi-hostpathplugin-7gtg4\" (UID: \"6dc5b220-3e84-4a0a-9f7a-f27a007436f6\") " pod="hostpath-provisioner/csi-hostpathplugin-7gtg4"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.144559 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2de0eb97-f51f-4468-9a68-eb9d6a7ce40d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-f5gbr\" (UID: \"2de0eb97-f51f-4468-9a68-eb9d6a7ce40d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f5gbr"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.144587 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0a1d56e-00d0-4e88-bdfb-461578e327e6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-w9gmp\" (UID: \"a0a1d56e-00d0-4e88-bdfb-461578e327e6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w9gmp"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.144627 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/873a8c0c-9da4-4619-9ebf-7a327eb22b7e-config-volume\") pod \"collect-profiles-29500620-rxhcg\" (UID: \"873a8c0c-9da4-4619-9ebf-7a327eb22b7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-rxhcg"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.144654 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wj4v\" (UniqueName: \"kubernetes.io/projected/46f85b66-5a30-4bef-909c-26750b18e72d-kube-api-access-5wj4v\") pod \"control-plane-machine-set-operator-78cbb6b69f-jdzwk\" (UID: \"46f85b66-5a30-4bef-909c-26750b18e72d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jdzwk"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.144680 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vzjw\" (UniqueName: \"kubernetes.io/projected/911d6570-6a65-42b8-a562-3e1ccdc8d562-kube-api-access-9vzjw\") pod \"machine-config-operator-74547568cd-c958f\" (UID: \"911d6570-6a65-42b8-a562-3e1ccdc8d562\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c958f"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.144703 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrsnv\" (UniqueName: \"kubernetes.io/projected/c5b5487e-8a60-4967-b0f3-1d983c559f8a-kube-api-access-zrsnv\") pod \"packageserver-d55dfcdfc-krxdl\" (UID: \"c5b5487e-8a60-4967-b0f3-1d983c559f8a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krxdl"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.144756 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab4249b9-1751-45d6-be3f-58668c4542bd-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-p9lvq\" (UID: \"ab4249b9-1751-45d6-be3f-58668c4542bd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.145197 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab4249b9-1751-45d6-be3f-58668c4542bd-audit-dir\") pod \"apiserver-7bbb656c7d-p9lvq\" (UID: \"ab4249b9-1751-45d6-be3f-58668c4542bd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.145236 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjb78\" (UniqueName: \"kubernetes.io/projected/cb2244ea-f203-4f66-9a4d-aad5e58a5c46-kube-api-access-gjb78\") pod \"catalog-operator-68c6474976-bz9nm\" (UID: \"cb2244ea-f203-4f66-9a4d-aad5e58a5c46\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bz9nm"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.145262 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32b31753-6a52-4364-b01f-9d50aeac7c13-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-78nss\" (UID: \"32b31753-6a52-4364-b01f-9d50aeac7c13\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-78nss"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.145289 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb-default-certificate\") pod \"router-default-5444994796-7vhgv\" (UID: \"b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb\") " pod="openshift-ingress/router-default-5444994796-7vhgv"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.145312 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb-service-ca-bundle\") pod \"router-default-5444994796-7vhgv\" (UID: \"b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb\") " pod="openshift-ingress/router-default-5444994796-7vhgv"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.145336 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb-stats-auth\") pod \"router-default-5444994796-7vhgv\" (UID: \"b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb\") " pod="openshift-ingress/router-default-5444994796-7vhgv"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.145359 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab926544-a708-445a-aaf9-0e3ad4593676-config\") pod \"service-ca-operator-777779d784-csktx\" (UID: \"ab926544-a708-445a-aaf9-0e3ad4593676\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-csktx"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.145382 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ab4249b9-1751-45d6-be3f-58668c4542bd-audit-policies\") pod \"apiserver-7bbb656c7d-p9lvq\" (UID: \"ab4249b9-1751-45d6-be3f-58668c4542bd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.145403 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnzzr\" (UniqueName: \"kubernetes.io/projected/873a8c0c-9da4-4619-9ebf-7a327eb22b7e-kube-api-access-lnzzr\") pod \"collect-profiles-29500620-rxhcg\" (UID: \"873a8c0c-9da4-4619-9ebf-7a327eb22b7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-rxhcg"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.145426 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hvpn\" (UniqueName: \"kubernetes.io/projected/53727b9f-9f5f-4f6e-8fa2-a6018c8225f5-kube-api-access-6hvpn\") pod \"ingress-canary-kq22p\" (UID: \"53727b9f-9f5f-4f6e-8fa2-a6018c8225f5\") " pod="openshift-ingress-canary/ingress-canary-kq22p"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.145451 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cb2244ea-f203-4f66-9a4d-aad5e58a5c46-profile-collector-cert\") pod \"catalog-operator-68c6474976-bz9nm\" (UID: \"cb2244ea-f203-4f66-9a4d-aad5e58a5c46\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bz9nm"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.145475 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a107c9e-3f0e-4d4b-8b02-65bda6793a2d-serving-cert\") pod \"etcd-operator-b45778765-2jzch\" (UID: \"4a107c9e-3f0e-4d4b-8b02-65bda6793a2d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jzch"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.145498 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/debfaa13-5820-4570-a447-8ef48903144c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2wzzt\" (UID: \"debfaa13-5820-4570-a447-8ef48903144c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2wzzt"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.145523 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e5f7f80a-15ef-47b9-9e1e-325066df7897-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.145562 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32b31753-6a52-4364-b01f-9d50aeac7c13-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-78nss\" (UID: \"32b31753-6a52-4364-b01f-9d50aeac7c13\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-78nss"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.145591 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e5f7f80a-15ef-47b9-9e1e-325066df7897-registry-tls\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.145615 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/873a8c0c-9da4-4619-9ebf-7a327eb22b7e-secret-volume\") pod \"collect-profiles-29500620-rxhcg\" (UID: \"873a8c0c-9da4-4619-9ebf-7a327eb22b7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-rxhcg"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.145638 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn92f\" (UniqueName: \"kubernetes.io/projected/2c9074bc-889d-4ce7-a250-6fc5984703e0-kube-api-access-xn92f\") pod \"marketplace-operator-79b997595-zcf44\" (UID: \"2c9074bc-889d-4ce7-a250-6fc5984703e0\") " pod="openshift-marketplace/marketplace-operator-79b997595-zcf44"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.145666 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b06f9eba-0e3d-47fb-a386-a166987e78fd-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hlnsv\" (UID: \"b06f9eba-0e3d-47fb-a386-a166987e78fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hlnsv"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.145689 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0a1d56e-00d0-4e88-bdfb-461578e327e6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-w9gmp\" (UID: \"a0a1d56e-00d0-4e88-bdfb-461578e327e6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w9gmp"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.145715 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32b31753-6a52-4364-b01f-9d50aeac7c13-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-78nss\" (UID: \"32b31753-6a52-4364-b01f-9d50aeac7c13\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-78nss"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.145753 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a98572f-0fab-4dab-9935-6bf52cdc7fff-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6wdm7\" (UID: \"0a98572f-0fab-4dab-9935-6bf52cdc7fff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6wdm7"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.145777 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zflp4\" (UniqueName: \"kubernetes.io/projected/625494cc-c7b0-4a0a-811c-d4822b1c0acc-kube-api-access-zflp4\") pod \"dns-default-kh9ph\" (UID: \"625494cc-c7b0-4a0a-811c-d4822b1c0acc\") " pod="openshift-dns/dns-default-kh9ph"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.145802 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e5f7f80a-15ef-47b9-9e1e-325066df7897-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.145826 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2qsg\" (UniqueName: \"kubernetes.io/projected/6dc5b220-3e84-4a0a-9f7a-f27a007436f6-kube-api-access-p2qsg\") pod \"csi-hostpathplugin-7gtg4\" (UID: \"6dc5b220-3e84-4a0a-9f7a-f27a007436f6\") " pod="hostpath-provisioner/csi-hostpathplugin-7gtg4"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.145865 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a107c9e-3f0e-4d4b-8b02-65bda6793a2d-config\") pod \"etcd-operator-b45778765-2jzch\" (UID: \"4a107c9e-3f0e-4d4b-8b02-65bda6793a2d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jzch"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.145940 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e5f7f80a-15ef-47b9-9e1e-325066df7897-registry-certificates\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.145965 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c5b5487e-8a60-4967-b0f3-1d983c559f8a-webhook-cert\") pod \"packageserver-d55dfcdfc-krxdl\" (UID: \"c5b5487e-8a60-4967-b0f3-1d983c559f8a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krxdl"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.145988 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/911d6570-6a65-42b8-a562-3e1ccdc8d562-proxy-tls\") pod \"machine-config-operator-74547568cd-c958f\" (UID: \"911d6570-6a65-42b8-a562-3e1ccdc8d562\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c958f"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.146010 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8c9b912-fd75-4f62-bb98-9a37bd5e7ebc-config\") pod \"kube-apiserver-operator-766d6c64bb-mkplf\" (UID: \"c8c9b912-fd75-4f62-bb98-9a37bd5e7ebc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkplf"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.146051 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4a107c9e-3f0e-4d4b-8b02-65bda6793a2d-etcd-service-ca\") pod \"etcd-operator-b45778765-2jzch\" (UID: \"4a107c9e-3f0e-4d4b-8b02-65bda6793a2d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jzch"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.146091 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b06f9eba-0e3d-47fb-a386-a166987e78fd-srv-cert\") pod \"olm-operator-6b444d44fb-hlnsv\" (UID: \"b06f9eba-0e3d-47fb-a386-a166987e78fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hlnsv"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.146118 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlpq2\" (UniqueName: \"kubernetes.io/projected/4a107c9e-3f0e-4d4b-8b02-65bda6793a2d-kube-api-access-rlpq2\") pod \"etcd-operator-b45778765-2jzch\" (UID: \"4a107c9e-3f0e-4d4b-8b02-65bda6793a2d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jzch"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.146158 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/37b1d658-0b12-4afd-9b42-9f54f553d432-signing-cabundle\") pod \"service-ca-9c57cc56f-r47km\" (UID: \"37b1d658-0b12-4afd-9b42-9f54f553d432\") " pod="openshift-service-ca/service-ca-9c57cc56f-r47km"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.146201 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwm4n\" (UniqueName: \"kubernetes.io/projected/e5f7f80a-15ef-47b9-9e1e-325066df7897-kube-api-access-rwm4n\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.146229 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8c9b912-fd75-4f62-bb98-9a37bd5e7ebc-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mkplf\" (UID: \"c8c9b912-fd75-4f62-bb98-9a37bd5e7ebc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkplf"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.146255 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkxj7\" (UniqueName: \"kubernetes.io/projected/bace5e2a-2c1a-433c-bc00-9121a45aa515-kube-api-access-jkxj7\") pod \"migrator-59844c95c7-qr8r7\" (UID: \"bace5e2a-2c1a-433c-bc00-9121a45aa515\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qr8r7"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.146316 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lbkx\" (UniqueName: \"kubernetes.io/projected/2de0eb97-f51f-4468-9a68-eb9d6a7ce40d-kube-api-access-4lbkx\") pod \"machine-config-controller-84d6567774-f5gbr\" (UID: \"2de0eb97-f51f-4468-9a68-eb9d6a7ce40d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f5gbr"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.146422 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e5f7f80a-15ef-47b9-9e1e-325066df7897-trusted-ca\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.147031 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/63b0f1ec-c2d9-4005-ba10-839949dbbcac-certs\") pod \"machine-config-server-8t7x8\" (UID: \"63b0f1ec-c2d9-4005-ba10-839949dbbcac\") " pod="openshift-machine-config-operator/machine-config-server-8t7x8"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.147166 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c5b5487e-8a60-4967-b0f3-1d983c559f8a-tmpfs\") pod \"packageserver-d55dfcdfc-krxdl\" (UID: \"c5b5487e-8a60-4967-b0f3-1d983c559f8a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krxdl"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.147189 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5x7n\" (UniqueName: \"kubernetes.io/projected/37b1d658-0b12-4afd-9b42-9f54f553d432-kube-api-access-s5x7n\") pod \"service-ca-9c57cc56f-r47km\" (UID: \"37b1d658-0b12-4afd-9b42-9f54f553d432\") " pod="openshift-service-ca/service-ca-9c57cc56f-r47km"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.147212 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb-metrics-certs\") pod \"router-default-5444994796-7vhgv\" (UID: \"b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb\") " pod="openshift-ingress/router-default-5444994796-7vhgv"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.147230 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c5b5487e-8a60-4967-b0f3-1d983c559f8a-apiservice-cert\") pod \"packageserver-d55dfcdfc-krxdl\" (UID: \"c5b5487e-8a60-4967-b0f3-1d983c559f8a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krxdl"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.147305 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bksrl\" (UniqueName: \"kubernetes.io/projected/debfaa13-5820-4570-a447-8ef48903144c-kube-api-access-bksrl\") pod \"package-server-manager-789f6589d5-2wzzt\" (UID: \"debfaa13-5820-4570-a447-8ef48903144c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2wzzt"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.147324 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/625494cc-c7b0-4a0a-811c-d4822b1c0acc-metrics-tls\") pod \"dns-default-kh9ph\" (UID: \"625494cc-c7b0-4a0a-811c-d4822b1c0acc\") " pod="openshift-dns/dns-default-kh9ph"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.147367 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjxt2\" (UniqueName: \"kubernetes.io/projected/b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb-kube-api-access-tjxt2\") pod \"router-default-5444994796-7vhgv\" (UID: \"b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb\") " pod="openshift-ingress/router-default-5444994796-7vhgv"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.147587 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e5f7f80a-15ef-47b9-9e1e-325066df7897-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.147823 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4a107c9e-3f0e-4d4b-8b02-65bda6793a2d-etcd-client\") pod \"etcd-operator-b45778765-2jzch\" (UID: \"4a107c9e-3f0e-4d4b-8b02-65bda6793a2d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jzch"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.147855 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkvfb\" (UniqueName: \"kubernetes.io/projected/0a98572f-0fab-4dab-9935-6bf52cdc7fff-kube-api-access-hkvfb\") pod \"kube-storage-version-migrator-operator-b67b599dd-6wdm7\" (UID: \"0a98572f-0fab-4dab-9935-6bf52cdc7fff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6wdm7"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.154306 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/911d6570-6a65-42b8-a562-3e1ccdc8d562-images\") pod \"machine-config-operator-74547568cd-c958f\" (UID: \"911d6570-6a65-42b8-a562-3e1ccdc8d562\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c958f"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.154337 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2c9074bc-889d-4ce7-a250-6fc5984703e0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zcf44\" (UID: \"2c9074bc-889d-4ce7-a250-6fc5984703e0\") " pod="openshift-marketplace/marketplace-operator-79b997595-zcf44"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.154369 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/37b1d658-0b12-4afd-9b42-9f54f553d432-signing-key\") pod \"service-ca-9c57cc56f-r47km\" (UID: \"37b1d658-0b12-4afd-9b42-9f54f553d432\") " pod="openshift-service-ca/service-ca-9c57cc56f-r47km"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.154414 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2a74f12-1bed-4744-9dec-57282d5301eb-serving-cert\") pod \"openshift-config-operator-7777fb866f-fgg7h\" (UID: \"f2a74f12-1bed-4744-9dec-57282d5301eb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fgg7h"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.154445 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsw9p\" (UniqueName: \"kubernetes.io/projected/b06f9eba-0e3d-47fb-a386-a166987e78fd-kube-api-access-hsw9p\") pod \"olm-operator-6b444d44fb-hlnsv\" (UID: \"b06f9eba-0e3d-47fb-a386-a166987e78fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hlnsv"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.154488 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccp75\" (UniqueName: \"kubernetes.io/projected/ab4249b9-1751-45d6-be3f-58668c4542bd-kube-api-access-ccp75\") pod \"apiserver-7bbb656c7d-p9lvq\" (UID: \"ab4249b9-1751-45d6-be3f-58668c4542bd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.154509 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab4249b9-1751-45d6-be3f-58668c4542bd-serving-cert\") pod \"apiserver-7bbb656c7d-p9lvq\" (UID: \"ab4249b9-1751-45d6-be3f-58668c4542bd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.154538 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cb2244ea-f203-4f66-9a4d-aad5e58a5c46-srv-cert\") pod \"catalog-operator-68c6474976-bz9nm\" (UID: \"cb2244ea-f203-4f66-9a4d-aad5e58a5c46\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bz9nm"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.154565 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f2a74f12-1bed-4744-9dec-57282d5301eb-available-featuregates\") pod \"openshift-config-operator-7777fb866f-fgg7h\" (UID: \"f2a74f12-1bed-4744-9dec-57282d5301eb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fgg7h"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.154725 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb-stats-auth\") pod \"router-default-5444994796-7vhgv\" (UID: \"b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb\") " pod="openshift-ingress/router-default-5444994796-7vhgv"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.149423 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ab4249b9-1751-45d6-be3f-58668c4542bd-audit-policies\") pod \"apiserver-7bbb656c7d-p9lvq\" (UID: \"ab4249b9-1751-45d6-be3f-58668c4542bd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.148641 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab4249b9-1751-45d6-be3f-58668c4542bd-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-p9lvq\" (UID: \"ab4249b9-1751-45d6-be3f-58668c4542bd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.149833 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4a107c9e-3f0e-4d4b-8b02-65bda6793a2d-etcd-service-ca\") pod \"etcd-operator-b45778765-2jzch\" (UID: \"4a107c9e-3f0e-4d4b-8b02-65bda6793a2d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jzch"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.148978 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab4249b9-1751-45d6-be3f-58668c4542bd-audit-dir\") pod \"apiserver-7bbb656c7d-p9lvq\" (UID: \"ab4249b9-1751-45d6-be3f-58668c4542bd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.148762 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb-service-ca-bundle\") pod \"router-default-5444994796-7vhgv\" (UID: \"b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb\") " pod="openshift-ingress/router-default-5444994796-7vhgv"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.150256 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ab4249b9-1751-45d6-be3f-58668c4542bd-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-p9lvq\" (UID: \"ab4249b9-1751-45d6-be3f-58668c4542bd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.148779 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a107c9e-3f0e-4d4b-8b02-65bda6793a2d-config\") pod \"etcd-operator-b45778765-2jzch\" (UID: \"4a107c9e-3f0e-4d4b-8b02-65bda6793a2d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jzch"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.149776 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e5f7f80a-15ef-47b9-9e1e-325066df7897-registry-certificates\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.152386 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb-metrics-certs\") pod \"router-default-5444994796-7vhgv\" (UID: \"b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb\")
" pod="openshift-ingress/router-default-5444994796-7vhgv" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.154059 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ab4249b9-1751-45d6-be3f-58668c4542bd-etcd-client\") pod \"apiserver-7bbb656c7d-p9lvq\" (UID: \"ab4249b9-1751-45d6-be3f-58668c4542bd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.155974 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f2a74f12-1bed-4744-9dec-57282d5301eb-available-featuregates\") pod \"openshift-config-operator-7777fb866f-fgg7h\" (UID: \"f2a74f12-1bed-4744-9dec-57282d5301eb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fgg7h" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.157327 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a107c9e-3f0e-4d4b-8b02-65bda6793a2d-serving-cert\") pod \"etcd-operator-b45778765-2jzch\" (UID: \"4a107c9e-3f0e-4d4b-8b02-65bda6793a2d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jzch" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.158237 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab4249b9-1751-45d6-be3f-58668c4542bd-serving-cert\") pod \"apiserver-7bbb656c7d-p9lvq\" (UID: \"ab4249b9-1751-45d6-be3f-58668c4542bd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.158446 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4a107c9e-3f0e-4d4b-8b02-65bda6793a2d-etcd-client\") pod \"etcd-operator-b45778765-2jzch\" (UID: \"4a107c9e-3f0e-4d4b-8b02-65bda6793a2d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jzch" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.159305 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2c9074bc-889d-4ce7-a250-6fc5984703e0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zcf44\" (UID: \"2c9074bc-889d-4ce7-a250-6fc5984703e0\") " pod="openshift-marketplace/marketplace-operator-79b997595-zcf44" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.159363 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4a107c9e-3f0e-4d4b-8b02-65bda6793a2d-etcd-ca\") pod \"etcd-operator-b45778765-2jzch\" (UID: \"4a107c9e-3f0e-4d4b-8b02-65bda6793a2d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jzch" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.159385 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8c9b912-fd75-4f62-bb98-9a37bd5e7ebc-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mkplf\" (UID: \"c8c9b912-fd75-4f62-bb98-9a37bd5e7ebc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkplf" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.159407 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/6dc5b220-3e84-4a0a-9f7a-f27a007436f6-csi-data-dir\") pod \"csi-hostpathplugin-7gtg4\" (UID: \"6dc5b220-3e84-4a0a-9f7a-f27a007436f6\") " pod="hostpath-provisioner/csi-hostpathplugin-7gtg4" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.159588 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ab4249b9-1751-45d6-be3f-58668c4542bd-encryption-config\") pod \"apiserver-7bbb656c7d-p9lvq\" (UID: \"ab4249b9-1751-45d6-be3f-58668c4542bd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.160276 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4a107c9e-3f0e-4d4b-8b02-65bda6793a2d-etcd-ca\") pod \"etcd-operator-b45778765-2jzch\" (UID: \"4a107c9e-3f0e-4d4b-8b02-65bda6793a2d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jzch" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.160419 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb-default-certificate\") pod \"router-default-5444994796-7vhgv\" (UID: \"b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb\") " pod="openshift-ingress/router-default-5444994796-7vhgv" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.160909 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e5f7f80a-15ef-47b9-9e1e-325066df7897-registry-tls\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.160920 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e5f7f80a-15ef-47b9-9e1e-325066df7897-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.163708 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2a74f12-1bed-4744-9dec-57282d5301eb-serving-cert\") pod \"openshift-config-operator-7777fb866f-fgg7h\" (UID: \"f2a74f12-1bed-4744-9dec-57282d5301eb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fgg7h" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.218408 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e5f7f80a-15ef-47b9-9e1e-325066df7897-bound-sa-token\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.224457 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwm4n\" (UniqueName: \"kubernetes.io/projected/e5f7f80a-15ef-47b9-9e1e-325066df7897-kube-api-access-rwm4n\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.237619 4721 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rlpq2\" (UniqueName: \"kubernetes.io/projected/4a107c9e-3f0e-4d4b-8b02-65bda6793a2d-kube-api-access-rlpq2\") pod \"etcd-operator-b45778765-2jzch\" (UID: \"4a107c9e-3f0e-4d4b-8b02-65bda6793a2d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2jzch" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.271542 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds87p\" (UniqueName: \"kubernetes.io/projected/f2a74f12-1bed-4744-9dec-57282d5301eb-kube-api-access-ds87p\") pod \"openshift-config-operator-7777fb866f-fgg7h\" (UID: \"f2a74f12-1bed-4744-9dec-57282d5301eb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fgg7h" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.271885 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-f4l8v" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.272940 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkvfb\" (UniqueName: \"kubernetes.io/projected/0a98572f-0fab-4dab-9935-6bf52cdc7fff-kube-api-access-hkvfb\") pod \"kube-storage-version-migrator-operator-b67b599dd-6wdm7\" (UID: \"0a98572f-0fab-4dab-9935-6bf52cdc7fff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6wdm7" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.272969 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/911d6570-6a65-42b8-a562-3e1ccdc8d562-images\") pod \"machine-config-operator-74547568cd-c958f\" (UID: \"911d6570-6a65-42b8-a562-3e1ccdc8d562\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c958f" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.273006 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2c9074bc-889d-4ce7-a250-6fc5984703e0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zcf44\" (UID: \"2c9074bc-889d-4ce7-a250-6fc5984703e0\") " pod="openshift-marketplace/marketplace-operator-79b997595-zcf44" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.273025 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/37b1d658-0b12-4afd-9b42-9f54f553d432-signing-key\") pod \"service-ca-9c57cc56f-r47km\" (UID: \"37b1d658-0b12-4afd-9b42-9f54f553d432\") " pod="openshift-service-ca/service-ca-9c57cc56f-r47km" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.273044 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsw9p\" (UniqueName: \"kubernetes.io/projected/b06f9eba-0e3d-47fb-a386-a166987e78fd-kube-api-access-hsw9p\") pod \"olm-operator-6b444d44fb-hlnsv\" (UID: \"b06f9eba-0e3d-47fb-a386-a166987e78fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hlnsv" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.273122 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cb2244ea-f203-4f66-9a4d-aad5e58a5c46-srv-cert\") pod \"catalog-operator-68c6474976-bz9nm\" (UID: \"cb2244ea-f203-4f66-9a4d-aad5e58a5c46\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bz9nm" Feb 02 13:03:31 
crc kubenswrapper[4721]: I0202 13:03:31.273141 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2c9074bc-889d-4ce7-a250-6fc5984703e0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zcf44\" (UID: \"2c9074bc-889d-4ce7-a250-6fc5984703e0\") " pod="openshift-marketplace/marketplace-operator-79b997595-zcf44" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.273162 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8c9b912-fd75-4f62-bb98-9a37bd5e7ebc-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mkplf\" (UID: \"c8c9b912-fd75-4f62-bb98-9a37bd5e7ebc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkplf" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.273200 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6dc5b220-3e84-4a0a-9f7a-f27a007436f6-csi-data-dir\") pod \"csi-hostpathplugin-7gtg4\" (UID: \"6dc5b220-3e84-4a0a-9f7a-f27a007436f6\") " pod="hostpath-provisioner/csi-hostpathplugin-7gtg4" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.273223 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.273243 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/46f85b66-5a30-4bef-909c-26750b18e72d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jdzwk\" (UID: \"46f85b66-5a30-4bef-909c-26750b18e72d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jdzwk" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.273294 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6dc5b220-3e84-4a0a-9f7a-f27a007436f6-mountpoint-dir\") pod \"csi-hostpathplugin-7gtg4\" (UID: \"6dc5b220-3e84-4a0a-9f7a-f27a007436f6\") " pod="hostpath-provisioner/csi-hostpathplugin-7gtg4" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.273315 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/911d6570-6a65-42b8-a562-3e1ccdc8d562-auth-proxy-config\") pod \"machine-config-operator-74547568cd-c958f\" (UID: \"911d6570-6a65-42b8-a562-3e1ccdc8d562\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c958f" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.273849 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6dc5b220-3e84-4a0a-9f7a-f27a007436f6-socket-dir\") pod \"csi-hostpathplugin-7gtg4\" (UID: \"6dc5b220-3e84-4a0a-9f7a-f27a007436f6\") " pod="hostpath-provisioner/csi-hostpathplugin-7gtg4" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.273880 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/63b0f1ec-c2d9-4005-ba10-839949dbbcac-node-bootstrap-token\") pod \"machine-config-server-8t7x8\" (UID: \"63b0f1ec-c2d9-4005-ba10-839949dbbcac\") " pod="openshift-machine-config-operator/machine-config-server-8t7x8" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.273927 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w84l4\" (UniqueName: \"kubernetes.io/projected/ab926544-a708-445a-aaf9-0e3ad4593676-kube-api-access-w84l4\") pod \"service-ca-operator-777779d784-csktx\" (UID: \"ab926544-a708-445a-aaf9-0e3ad4593676\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-csktx" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.273959 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4sll\" (UniqueName: \"kubernetes.io/projected/63b0f1ec-c2d9-4005-ba10-839949dbbcac-kube-api-access-n4sll\") pod \"machine-config-server-8t7x8\" (UID: \"63b0f1ec-c2d9-4005-ba10-839949dbbcac\") " pod="openshift-machine-config-operator/machine-config-server-8t7x8" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274006 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7grx\" (UniqueName: \"kubernetes.io/projected/6104d27e-fefa-4e2a-9b9e-62013c96f664-kube-api-access-m7grx\") pod \"multus-admission-controller-857f4d67dd-c68d5\" (UID: \"6104d27e-fefa-4e2a-9b9e-62013c96f664\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c68d5" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274033 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6dc5b220-3e84-4a0a-9f7a-f27a007436f6-csi-data-dir\") pod \"csi-hostpathplugin-7gtg4\" (UID: \"6dc5b220-3e84-4a0a-9f7a-f27a007436f6\") " pod="hostpath-provisioner/csi-hostpathplugin-7gtg4" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274035 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2de0eb97-f51f-4468-9a68-eb9d6a7ce40d-proxy-tls\") pod \"machine-config-controller-84d6567774-f5gbr\" (UID: \"2de0eb97-f51f-4468-9a68-eb9d6a7ce40d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f5gbr" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274119 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a98572f-0fab-4dab-9935-6bf52cdc7fff-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6wdm7\" (UID: \"0a98572f-0fab-4dab-9935-6bf52cdc7fff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6wdm7" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274140 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/625494cc-c7b0-4a0a-811c-d4822b1c0acc-config-volume\") pod \"dns-default-kh9ph\" (UID: \"625494cc-c7b0-4a0a-811c-d4822b1c0acc\") " pod="openshift-dns/dns-default-kh9ph" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274182 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab926544-a708-445a-aaf9-0e3ad4593676-serving-cert\") pod \"service-ca-operator-777779d784-csktx\" (UID: 
\"ab926544-a708-445a-aaf9-0e3ad4593676\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-csktx" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274200 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/53727b9f-9f5f-4f6e-8fa2-a6018c8225f5-cert\") pod \"ingress-canary-kq22p\" (UID: \"53727b9f-9f5f-4f6e-8fa2-a6018c8225f5\") " pod="openshift-ingress-canary/ingress-canary-kq22p" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274217 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0a1d56e-00d0-4e88-bdfb-461578e327e6-config\") pod \"kube-controller-manager-operator-78b949d7b-w9gmp\" (UID: \"a0a1d56e-00d0-4e88-bdfb-461578e327e6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w9gmp" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274272 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6104d27e-fefa-4e2a-9b9e-62013c96f664-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-c68d5\" (UID: \"6104d27e-fefa-4e2a-9b9e-62013c96f664\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c68d5" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274288 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6dc5b220-3e84-4a0a-9f7a-f27a007436f6-plugins-dir\") pod \"csi-hostpathplugin-7gtg4\" (UID: \"6dc5b220-3e84-4a0a-9f7a-f27a007436f6\") " pod="hostpath-provisioner/csi-hostpathplugin-7gtg4" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274308 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6dc5b220-3e84-4a0a-9f7a-f27a007436f6-registration-dir\") pod \"csi-hostpathplugin-7gtg4\" (UID: \"6dc5b220-3e84-4a0a-9f7a-f27a007436f6\") " pod="hostpath-provisioner/csi-hostpathplugin-7gtg4" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274348 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0a1d56e-00d0-4e88-bdfb-461578e327e6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-w9gmp\" (UID: \"a0a1d56e-00d0-4e88-bdfb-461578e327e6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w9gmp" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274370 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2de0eb97-f51f-4468-9a68-eb9d6a7ce40d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-f5gbr\" (UID: \"2de0eb97-f51f-4468-9a68-eb9d6a7ce40d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f5gbr" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274390 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/873a8c0c-9da4-4619-9ebf-7a327eb22b7e-config-volume\") pod \"collect-profiles-29500620-rxhcg\" (UID: \"873a8c0c-9da4-4619-9ebf-7a327eb22b7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-rxhcg" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274428 4721 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vzjw\" (UniqueName: \"kubernetes.io/projected/911d6570-6a65-42b8-a562-3e1ccdc8d562-kube-api-access-9vzjw\") pod \"machine-config-operator-74547568cd-c958f\" (UID: \"911d6570-6a65-42b8-a562-3e1ccdc8d562\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c958f" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274444 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wj4v\" (UniqueName: \"kubernetes.io/projected/46f85b66-5a30-4bef-909c-26750b18e72d-kube-api-access-5wj4v\") pod \"control-plane-machine-set-operator-78cbb6b69f-jdzwk\" (UID: \"46f85b66-5a30-4bef-909c-26750b18e72d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jdzwk" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274461 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrsnv\" (UniqueName: \"kubernetes.io/projected/c5b5487e-8a60-4967-b0f3-1d983c559f8a-kube-api-access-zrsnv\") pod \"packageserver-d55dfcdfc-krxdl\" (UID: \"c5b5487e-8a60-4967-b0f3-1d983c559f8a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krxdl" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274504 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjb78\" (UniqueName: \"kubernetes.io/projected/cb2244ea-f203-4f66-9a4d-aad5e58a5c46-kube-api-access-gjb78\") pod \"catalog-operator-68c6474976-bz9nm\" (UID: \"cb2244ea-f203-4f66-9a4d-aad5e58a5c46\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bz9nm" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274525 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32b31753-6a52-4364-b01f-9d50aeac7c13-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-78nss\" (UID: \"32b31753-6a52-4364-b01f-9d50aeac7c13\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-78nss" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274547 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab926544-a708-445a-aaf9-0e3ad4593676-config\") pod \"service-ca-operator-777779d784-csktx\" (UID: \"ab926544-a708-445a-aaf9-0e3ad4593676\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-csktx" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274590 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnzzr\" (UniqueName: \"kubernetes.io/projected/873a8c0c-9da4-4619-9ebf-7a327eb22b7e-kube-api-access-lnzzr\") pod \"collect-profiles-29500620-rxhcg\" (UID: \"873a8c0c-9da4-4619-9ebf-7a327eb22b7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-rxhcg" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274615 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hvpn\" (UniqueName: \"kubernetes.io/projected/53727b9f-9f5f-4f6e-8fa2-a6018c8225f5-kube-api-access-6hvpn\") pod \"ingress-canary-kq22p\" (UID: \"53727b9f-9f5f-4f6e-8fa2-a6018c8225f5\") " pod="openshift-ingress-canary/ingress-canary-kq22p" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274664 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/debfaa13-5820-4570-a447-8ef48903144c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2wzzt\" (UID: \"debfaa13-5820-4570-a447-8ef48903144c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2wzzt" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274691 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cb2244ea-f203-4f66-9a4d-aad5e58a5c46-profile-collector-cert\") pod \"catalog-operator-68c6474976-bz9nm\" (UID: \"cb2244ea-f203-4f66-9a4d-aad5e58a5c46\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bz9nm" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274709 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32b31753-6a52-4364-b01f-9d50aeac7c13-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-78nss\" (UID: \"32b31753-6a52-4364-b01f-9d50aeac7c13\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-78nss" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274750 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/873a8c0c-9da4-4619-9ebf-7a327eb22b7e-secret-volume\") pod \"collect-profiles-29500620-rxhcg\" (UID: \"873a8c0c-9da4-4619-9ebf-7a327eb22b7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-rxhcg" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274767 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn92f\" (UniqueName: \"kubernetes.io/projected/2c9074bc-889d-4ce7-a250-6fc5984703e0-kube-api-access-xn92f\") pod \"marketplace-operator-79b997595-zcf44\" (UID: \"2c9074bc-889d-4ce7-a250-6fc5984703e0\") " pod="openshift-marketplace/marketplace-operator-79b997595-zcf44" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274782 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b06f9eba-0e3d-47fb-a386-a166987e78fd-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hlnsv\" (UID: \"b06f9eba-0e3d-47fb-a386-a166987e78fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hlnsv" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274798 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32b31753-6a52-4364-b01f-9d50aeac7c13-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-78nss\" (UID: \"32b31753-6a52-4364-b01f-9d50aeac7c13\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-78nss" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274838 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0a1d56e-00d0-4e88-bdfb-461578e327e6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-w9gmp\" (UID: \"a0a1d56e-00d0-4e88-bdfb-461578e327e6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w9gmp" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274854 4721 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a98572f-0fab-4dab-9935-6bf52cdc7fff-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6wdm7\" (UID: \"0a98572f-0fab-4dab-9935-6bf52cdc7fff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6wdm7" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274869 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zflp4\" (UniqueName: \"kubernetes.io/projected/625494cc-c7b0-4a0a-811c-d4822b1c0acc-kube-api-access-zflp4\") pod \"dns-default-kh9ph\" (UID: \"625494cc-c7b0-4a0a-811c-d4822b1c0acc\") " pod="openshift-dns/dns-default-kh9ph" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274907 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2qsg\" (UniqueName: \"kubernetes.io/projected/6dc5b220-3e84-4a0a-9f7a-f27a007436f6-kube-api-access-p2qsg\") pod \"csi-hostpathplugin-7gtg4\" (UID: \"6dc5b220-3e84-4a0a-9f7a-f27a007436f6\") " pod="hostpath-provisioner/csi-hostpathplugin-7gtg4" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274926 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c5b5487e-8a60-4967-b0f3-1d983c559f8a-webhook-cert\") pod \"packageserver-d55dfcdfc-krxdl\" (UID: \"c5b5487e-8a60-4967-b0f3-1d983c559f8a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krxdl" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274948 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/911d6570-6a65-42b8-a562-3e1ccdc8d562-proxy-tls\") pod \"machine-config-operator-74547568cd-c958f\" (UID: \"911d6570-6a65-42b8-a562-3e1ccdc8d562\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c958f" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.274989 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8c9b912-fd75-4f62-bb98-9a37bd5e7ebc-config\") pod \"kube-apiserver-operator-766d6c64bb-mkplf\" (UID: \"c8c9b912-fd75-4f62-bb98-9a37bd5e7ebc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkplf" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.275007 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b06f9eba-0e3d-47fb-a386-a166987e78fd-srv-cert\") pod \"olm-operator-6b444d44fb-hlnsv\" (UID: \"b06f9eba-0e3d-47fb-a386-a166987e78fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hlnsv" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.275024 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/37b1d658-0b12-4afd-9b42-9f54f553d432-signing-cabundle\") pod \"service-ca-9c57cc56f-r47km\" (UID: \"37b1d658-0b12-4afd-9b42-9f54f553d432\") " pod="openshift-service-ca/service-ca-9c57cc56f-r47km" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.275056 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8c9b912-fd75-4f62-bb98-9a37bd5e7ebc-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mkplf\" (UID: \"c8c9b912-fd75-4f62-bb98-9a37bd5e7ebc\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkplf" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.275084 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkxj7\" (UniqueName: \"kubernetes.io/projected/bace5e2a-2c1a-433c-bc00-9121a45aa515-kube-api-access-jkxj7\") pod \"migrator-59844c95c7-qr8r7\" (UID: \"bace5e2a-2c1a-433c-bc00-9121a45aa515\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qr8r7" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.275102 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/63b0f1ec-c2d9-4005-ba10-839949dbbcac-certs\") pod \"machine-config-server-8t7x8\" (UID: \"63b0f1ec-c2d9-4005-ba10-839949dbbcac\") " pod="openshift-machine-config-operator/machine-config-server-8t7x8" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.275116 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lbkx\" (UniqueName: \"kubernetes.io/projected/2de0eb97-f51f-4468-9a68-eb9d6a7ce40d-kube-api-access-4lbkx\") pod \"machine-config-controller-84d6567774-f5gbr\" (UID: \"2de0eb97-f51f-4468-9a68-eb9d6a7ce40d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f5gbr" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.275152 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c5b5487e-8a60-4967-b0f3-1d983c559f8a-tmpfs\") pod \"packageserver-d55dfcdfc-krxdl\" (UID: \"c5b5487e-8a60-4967-b0f3-1d983c559f8a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krxdl" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.275166 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5x7n\" (UniqueName: \"kubernetes.io/projected/37b1d658-0b12-4afd-9b42-9f54f553d432-kube-api-access-s5x7n\") pod \"service-ca-9c57cc56f-r47km\" (UID: \"37b1d658-0b12-4afd-9b42-9f54f553d432\") " pod="openshift-service-ca/service-ca-9c57cc56f-r47km" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.275184 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c5b5487e-8a60-4967-b0f3-1d983c559f8a-apiservice-cert\") pod \"packageserver-d55dfcdfc-krxdl\" (UID: \"c5b5487e-8a60-4967-b0f3-1d983c559f8a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krxdl" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.275225 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bksrl\" (UniqueName: \"kubernetes.io/projected/debfaa13-5820-4570-a447-8ef48903144c-kube-api-access-bksrl\") pod \"package-server-manager-789f6589d5-2wzzt\" (UID: \"debfaa13-5820-4570-a447-8ef48903144c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2wzzt" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.275245 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/625494cc-c7b0-4a0a-811c-d4822b1c0acc-metrics-tls\") pod \"dns-default-kh9ph\" (UID: \"625494cc-c7b0-4a0a-811c-d4822b1c0acc\") " pod="openshift-dns/dns-default-kh9ph" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.275524 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2c9074bc-889d-4ce7-a250-6fc5984703e0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zcf44\" (UID: \"2c9074bc-889d-4ce7-a250-6fc5984703e0\") " pod="openshift-marketplace/marketplace-operator-79b997595-zcf44" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.282710 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2de0eb97-f51f-4468-9a68-eb9d6a7ce40d-proxy-tls\") pod \"machine-config-controller-84d6567774-f5gbr\" (UID: \"2de0eb97-f51f-4468-9a68-eb9d6a7ce40d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f5gbr" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.283484 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8c9b912-fd75-4f62-bb98-9a37bd5e7ebc-config\") pod \"kube-apiserver-operator-766d6c64bb-mkplf\" (UID: \"c8c9b912-fd75-4f62-bb98-9a37bd5e7ebc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkplf" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.284680 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c5b5487e-8a60-4967-b0f3-1d983c559f8a-tmpfs\") pod \"packageserver-d55dfcdfc-krxdl\" (UID: \"c5b5487e-8a60-4967-b0f3-1d983c559f8a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krxdl" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.285299 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6dc5b220-3e84-4a0a-9f7a-f27a007436f6-socket-dir\") pod \"csi-hostpathplugin-7gtg4\" (UID: \"6dc5b220-3e84-4a0a-9f7a-f27a007436f6\") " pod="hostpath-provisioner/csi-hostpathplugin-7gtg4" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.285605 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/37b1d658-0b12-4afd-9b42-9f54f553d432-signing-cabundle\") pod \"service-ca-9c57cc56f-r47km\" (UID: \"37b1d658-0b12-4afd-9b42-9f54f553d432\") " pod="openshift-service-ca/service-ca-9c57cc56f-r47km" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.285694 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6dc5b220-3e84-4a0a-9f7a-f27a007436f6-mountpoint-dir\") pod \"csi-hostpathplugin-7gtg4\" (UID: \"6dc5b220-3e84-4a0a-9f7a-f27a007436f6\") " pod="hostpath-provisioner/csi-hostpathplugin-7gtg4" Feb 02 13:03:31 crc kubenswrapper[4721]: E0202 13:03:31.285897 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:31.785620305 +0000 UTC m=+152.088134694 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.287175 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cb2244ea-f203-4f66-9a4d-aad5e58a5c46-srv-cert\") pod \"catalog-operator-68c6474976-bz9nm\" (UID: \"cb2244ea-f203-4f66-9a4d-aad5e58a5c46\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bz9nm" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.288439 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/911d6570-6a65-42b8-a562-3e1ccdc8d562-images\") pod \"machine-config-operator-74547568cd-c958f\" (UID: \"911d6570-6a65-42b8-a562-3e1ccdc8d562\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c958f" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.296550 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/37b1d658-0b12-4afd-9b42-9f54f553d432-signing-key\") pod \"service-ca-9c57cc56f-r47km\" (UID: \"37b1d658-0b12-4afd-9b42-9f54f553d432\") " pod="openshift-service-ca/service-ca-9c57cc56f-r47km" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.296724 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/911d6570-6a65-42b8-a562-3e1ccdc8d562-auth-proxy-config\") pod \"machine-config-operator-74547568cd-c958f\" (UID: \"911d6570-6a65-42b8-a562-3e1ccdc8d562\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c958f" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.297850 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/63b0f1ec-c2d9-4005-ba10-839949dbbcac-certs\") pod \"machine-config-server-8t7x8\" (UID: \"63b0f1ec-c2d9-4005-ba10-839949dbbcac\") " pod="openshift-machine-config-operator/machine-config-server-8t7x8" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.300550 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2de0eb97-f51f-4468-9a68-eb9d6a7ce40d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-f5gbr\" (UID: \"2de0eb97-f51f-4468-9a68-eb9d6a7ce40d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f5gbr" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.304814 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/873a8c0c-9da4-4619-9ebf-7a327eb22b7e-config-volume\") pod \"collect-profiles-29500620-rxhcg\" (UID: \"873a8c0c-9da4-4619-9ebf-7a327eb22b7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-rxhcg" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.309048 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/c5b5487e-8a60-4967-b0f3-1d983c559f8a-apiservice-cert\") pod \"packageserver-d55dfcdfc-krxdl\" (UID: \"c5b5487e-8a60-4967-b0f3-1d983c559f8a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krxdl" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.309467 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccp75\" (UniqueName: \"kubernetes.io/projected/ab4249b9-1751-45d6-be3f-58668c4542bd-kube-api-access-ccp75\") pod \"apiserver-7bbb656c7d-p9lvq\" (UID: \"ab4249b9-1751-45d6-be3f-58668c4542bd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.309899 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a98572f-0fab-4dab-9935-6bf52cdc7fff-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6wdm7\" (UID: \"0a98572f-0fab-4dab-9935-6bf52cdc7fff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6wdm7" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.314515 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/625494cc-c7b0-4a0a-811c-d4822b1c0acc-config-volume\") pod \"dns-default-kh9ph\" (UID: \"625494cc-c7b0-4a0a-811c-d4822b1c0acc\") " pod="openshift-dns/dns-default-kh9ph" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.315247 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0a1d56e-00d0-4e88-bdfb-461578e327e6-config\") pod \"kube-controller-manager-operator-78b949d7b-w9gmp\" (UID: \"a0a1d56e-00d0-4e88-bdfb-461578e327e6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w9gmp" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.315417 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6dc5b220-3e84-4a0a-9f7a-f27a007436f6-plugins-dir\") pod \"csi-hostpathplugin-7gtg4\" (UID: \"6dc5b220-3e84-4a0a-9f7a-f27a007436f6\") " pod="hostpath-provisioner/csi-hostpathplugin-7gtg4" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.315492 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6104d27e-fefa-4e2a-9b9e-62013c96f664-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-c68d5\" (UID: \"6104d27e-fefa-4e2a-9b9e-62013c96f664\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c68d5" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.315697 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6dc5b220-3e84-4a0a-9f7a-f27a007436f6-registration-dir\") pod \"csi-hostpathplugin-7gtg4\" (UID: \"6dc5b220-3e84-4a0a-9f7a-f27a007436f6\") " pod="hostpath-provisioner/csi-hostpathplugin-7gtg4" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.315900 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab926544-a708-445a-aaf9-0e3ad4593676-config\") pod \"service-ca-operator-777779d784-csktx\" (UID: \"ab926544-a708-445a-aaf9-0e3ad4593676\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-csktx" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.317745 4721 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/873a8c0c-9da4-4619-9ebf-7a327eb22b7e-secret-volume\") pod \"collect-profiles-29500620-rxhcg\" (UID: \"873a8c0c-9da4-4619-9ebf-7a327eb22b7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-rxhcg" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.318003 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cb2244ea-f203-4f66-9a4d-aad5e58a5c46-profile-collector-cert\") pod \"catalog-operator-68c6474976-bz9nm\" (UID: \"cb2244ea-f203-4f66-9a4d-aad5e58a5c46\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bz9nm" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.318907 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32b31753-6a52-4364-b01f-9d50aeac7c13-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-78nss\" (UID: \"32b31753-6a52-4364-b01f-9d50aeac7c13\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-78nss" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.320696 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32b31753-6a52-4364-b01f-9d50aeac7c13-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-78nss\" (UID: \"32b31753-6a52-4364-b01f-9d50aeac7c13\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-78nss" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.324754 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/53727b9f-9f5f-4f6e-8fa2-a6018c8225f5-cert\") pod \"ingress-canary-kq22p\" (UID: \"53727b9f-9f5f-4f6e-8fa2-a6018c8225f5\") " pod="openshift-ingress-canary/ingress-canary-kq22p" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.325861 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0a1d56e-00d0-4e88-bdfb-461578e327e6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-w9gmp\" (UID: \"a0a1d56e-00d0-4e88-bdfb-461578e327e6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w9gmp" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.327617 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/625494cc-c7b0-4a0a-811c-d4822b1c0acc-metrics-tls\") pod \"dns-default-kh9ph\" (UID: \"625494cc-c7b0-4a0a-811c-d4822b1c0acc\") " pod="openshift-dns/dns-default-kh9ph" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.327958 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/911d6570-6a65-42b8-a562-3e1ccdc8d562-proxy-tls\") pod \"machine-config-operator-74547568cd-c958f\" (UID: \"911d6570-6a65-42b8-a562-3e1ccdc8d562\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c958f" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.328590 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/63b0f1ec-c2d9-4005-ba10-839949dbbcac-node-bootstrap-token\") pod \"machine-config-server-8t7x8\" (UID: 
\"63b0f1ec-c2d9-4005-ba10-839949dbbcac\") " pod="openshift-machine-config-operator/machine-config-server-8t7x8" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.330646 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a98572f-0fab-4dab-9935-6bf52cdc7fff-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6wdm7\" (UID: \"0a98572f-0fab-4dab-9935-6bf52cdc7fff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6wdm7" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.330917 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c5b5487e-8a60-4967-b0f3-1d983c559f8a-webhook-cert\") pod \"packageserver-d55dfcdfc-krxdl\" (UID: \"c5b5487e-8a60-4967-b0f3-1d983c559f8a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krxdl" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.331143 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjxt2\" (UniqueName: \"kubernetes.io/projected/b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb-kube-api-access-tjxt2\") pod \"router-default-5444994796-7vhgv\" (UID: \"b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb\") " pod="openshift-ingress/router-default-5444994796-7vhgv" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.331163 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab926544-a708-445a-aaf9-0e3ad4593676-serving-cert\") pod \"service-ca-operator-777779d784-csktx\" (UID: \"ab926544-a708-445a-aaf9-0e3ad4593676\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-csktx" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.331689 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b06f9eba-0e3d-47fb-a386-a166987e78fd-srv-cert\") pod \"olm-operator-6b444d44fb-hlnsv\" (UID: \"b06f9eba-0e3d-47fb-a386-a166987e78fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hlnsv" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.332383 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8c9b912-fd75-4f62-bb98-9a37bd5e7ebc-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mkplf\" (UID: \"c8c9b912-fd75-4f62-bb98-9a37bd5e7ebc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkplf" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.332832 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/debfaa13-5820-4570-a447-8ef48903144c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2wzzt\" (UID: \"debfaa13-5820-4570-a447-8ef48903144c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2wzzt" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.337181 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/46f85b66-5a30-4bef-909c-26750b18e72d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jdzwk\" (UID: \"46f85b66-5a30-4bef-909c-26750b18e72d\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jdzwk" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.337992 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkvfb\" (UniqueName: \"kubernetes.io/projected/0a98572f-0fab-4dab-9935-6bf52cdc7fff-kube-api-access-hkvfb\") pod \"kube-storage-version-migrator-operator-b67b599dd-6wdm7\" (UID: \"0a98572f-0fab-4dab-9935-6bf52cdc7fff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6wdm7" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.346252 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b06f9eba-0e3d-47fb-a386-a166987e78fd-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hlnsv\" (UID: \"b06f9eba-0e3d-47fb-a386-a166987e78fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hlnsv" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.353045 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2c9074bc-889d-4ce7-a250-6fc5984703e0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zcf44\" (UID: \"2c9074bc-889d-4ce7-a250-6fc5984703e0\") " pod="openshift-marketplace/marketplace-operator-79b997595-zcf44" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.367604 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2jzch" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.368557 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hvpn\" (UniqueName: \"kubernetes.io/projected/53727b9f-9f5f-4f6e-8fa2-a6018c8225f5-kube-api-access-6hvpn\") pod \"ingress-canary-kq22p\" (UID: \"53727b9f-9f5f-4f6e-8fa2-a6018c8225f5\") " pod="openshift-ingress-canary/ingress-canary-kq22p" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.374317 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-zt9ng"] Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.378416 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.378954 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq" Feb 02 13:03:31 crc kubenswrapper[4721]: E0202 13:03:31.379049 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:31.879034422 +0000 UTC m=+152.181548811 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.391006 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-7vhgv" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.397778 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkxj7\" (UniqueName: \"kubernetes.io/projected/bace5e2a-2c1a-433c-bc00-9121a45aa515-kube-api-access-jkxj7\") pod \"migrator-59844c95c7-qr8r7\" (UID: \"bace5e2a-2c1a-433c-bc00-9121a45aa515\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qr8r7" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.398549 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsw9p\" (UniqueName: \"kubernetes.io/projected/b06f9eba-0e3d-47fb-a386-a166987e78fd-kube-api-access-hsw9p\") pod \"olm-operator-6b444d44fb-hlnsv\" (UID: \"b06f9eba-0e3d-47fb-a386-a166987e78fd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hlnsv" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.420815 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qr8r7" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.435906 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvz9q"] Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.431553 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lbkx\" (UniqueName: \"kubernetes.io/projected/2de0eb97-f51f-4468-9a68-eb9d6a7ce40d-kube-api-access-4lbkx\") pod \"machine-config-controller-84d6567774-f5gbr\" (UID: \"2de0eb97-f51f-4468-9a68-eb9d6a7ce40d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f5gbr" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.444963 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fgg7h" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.448319 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w84l4\" (UniqueName: \"kubernetes.io/projected/ab926544-a708-445a-aaf9-0e3ad4593676-kube-api-access-w84l4\") pod \"service-ca-operator-777779d784-csktx\" (UID: \"ab926544-a708-445a-aaf9-0e3ad4593676\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-csktx" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.451433 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l49ld"] Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.459041 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4sll\" (UniqueName: \"kubernetes.io/projected/63b0f1ec-c2d9-4005-ba10-839949dbbcac-kube-api-access-n4sll\") pod \"machine-config-server-8t7x8\" (UID: \"63b0f1ec-c2d9-4005-ba10-839949dbbcac\") " pod="openshift-machine-config-operator/machine-config-server-8t7x8" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.460704 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ffkjd" event={"ID":"3c0670a6-888e-40e3-bf5d-82779e70dd1c","Type":"ContainerStarted","Data":"d435dac504fe3034e5527129f376c6ed65b5ea3e1fe83d1eb8463d6282795a18"} Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.460752 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ffkjd" event={"ID":"3c0670a6-888e-40e3-bf5d-82779e70dd1c","Type":"ContainerStarted","Data":"167d1cdbeb24f93927ece3e3fa3df789c23a8308344e8f29012657e06e53e904"} Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.461268 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-ffkjd" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.464254 4721 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-ffkjd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.464301 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-ffkjd" podUID="3c0670a6-888e-40e3-bf5d-82779e70dd1c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.475338 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hlnsv" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.475821 4721 generic.go:334] "Generic (PLEG): container finished" podID="64fc7a32-5852-4e03-b1b7-1663f7f52b65" containerID="a4cef40e8a1ba99ffc6b7c7bf185e7d2180fb0fbc1aa0082f5c2fcea210eadb0" exitCode=0 Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.475986 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-kcw66" event={"ID":"64fc7a32-5852-4e03-b1b7-1663f7f52b65","Type":"ContainerDied","Data":"a4cef40e8a1ba99ffc6b7c7bf185e7d2180fb0fbc1aa0082f5c2fcea210eadb0"} Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.476038 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-kcw66" event={"ID":"64fc7a32-5852-4e03-b1b7-1663f7f52b65","Type":"ContainerStarted","Data":"6433b629d1bde4bb980befd60ac6efea557893fc4cea82b8bd13eb528ae2b9a2"} Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.478649 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7grx\" (UniqueName: \"kubernetes.io/projected/6104d27e-fefa-4e2a-9b9e-62013c96f664-kube-api-access-m7grx\") pod \"multus-admission-controller-857f4d67dd-c68d5\" (UID: \"6104d27e-fefa-4e2a-9b9e-62013c96f664\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c68d5" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.480711 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:31 crc kubenswrapper[4721]: E0202 13:03:31.481374 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:31.98135161 +0000 UTC m=+152.283865999 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.490552 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f" event={"ID":"8f1e834f-23b5-42a5-9d13-b9e5720a597c","Type":"ContainerStarted","Data":"e50c92de6389f2984e09f1db3217014b56b2e9a4e2a44d5d7b627a3954d39329"} Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.491192 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.492467 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbv2j" event={"ID":"67f000c5-5173-44e3-89e6-446c345a6c05","Type":"ContainerStarted","Data":"85b5ad265fd3a6bc88b82d6a0a25866b976d9a961360c1827fefaea8446a285c"} Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.503161 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kq22p" Feb 02 13:03:31 crc kubenswrapper[4721]: W0202 13:03:31.530613 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf02a63f_5e62_47ff_baf5_1dc1e95dc1ad.slice/crio-72a9519fa163c4d5e947e7bea8efe4073413ddbd4dcca1685658420850559431 WatchSource:0}: Error finding container 72a9519fa163c4d5e947e7bea8efe4073413ddbd4dcca1685658420850559431: Status 404 returned error can't find the container with id 72a9519fa163c4d5e947e7bea8efe4073413ddbd4dcca1685658420850559431 Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.538458 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8t7x8" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.543715 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5x7n\" (UniqueName: \"kubernetes.io/projected/37b1d658-0b12-4afd-9b42-9f54f553d432-kube-api-access-s5x7n\") pod \"service-ca-9c57cc56f-r47km\" (UID: \"37b1d658-0b12-4afd-9b42-9f54f553d432\") " pod="openshift-service-ca/service-ca-9c57cc56f-r47km" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.548890 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6wdm7" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.553992 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bksrl\" (UniqueName: \"kubernetes.io/projected/debfaa13-5820-4570-a447-8ef48903144c-kube-api-access-bksrl\") pod \"package-server-manager-789f6589d5-2wzzt\" (UID: \"debfaa13-5820-4570-a447-8ef48903144c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2wzzt" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.558155 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pnfph" event={"ID":"083f0d8a-e0c4-46ae-8993-8547dd260553","Type":"ContainerStarted","Data":"533a6f959716e9909cf0141a19ed1a201052ec95a0fcc8a95909ea7a1c40cad5"} Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.558278 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pnfph" event={"ID":"083f0d8a-e0c4-46ae-8993-8547dd260553","Type":"ContainerStarted","Data":"0ce15c1c8fdb67f5b17f39530bbd9d91767933a832ff9983c81c0606885a8b64"} Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.577519 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrsnv\" (UniqueName: \"kubernetes.io/projected/c5b5487e-8a60-4967-b0f3-1d983c559f8a-kube-api-access-zrsnv\") pod \"packageserver-d55dfcdfc-krxdl\" (UID: \"c5b5487e-8a60-4967-b0f3-1d983c559f8a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krxdl" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.586022 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:31 crc kubenswrapper[4721]: E0202 13:03:31.586211 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:32.08618684 +0000 UTC m=+152.388701229 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.586564 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.586673 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f"
Feb 02 13:03:31 crc kubenswrapper[4721]: E0202 13:03:31.587117 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:32.087098835 +0000 UTC m=+152.389613224 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.593915 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wj4v\" (UniqueName: \"kubernetes.io/projected/46f85b66-5a30-4bef-909c-26750b18e72d-kube-api-access-5wj4v\") pod \"control-plane-machine-set-operator-78cbb6b69f-jdzwk\" (UID: \"46f85b66-5a30-4bef-909c-26750b18e72d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jdzwk"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.594810 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-7vhgv" event={"ID":"b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb","Type":"ContainerStarted","Data":"27861f5afd594c7c97e253d8a6c6893c1f11a45ddd73eafa17299df4c668a81e"}
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.598275 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" event={"ID":"962524c6-7992-43d5-a7f3-5fdd04297f24","Type":"ContainerStarted","Data":"e1c6b11699215c240779ba4ffc084b0f044db3750d6c816f2d805a78f36b24e5"}
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.615184 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vzjw\" (UniqueName: \"kubernetes.io/projected/911d6570-6a65-42b8-a562-3e1ccdc8d562-kube-api-access-9vzjw\") pod \"machine-config-operator-74547568cd-c958f\" (UID: \"911d6570-6a65-42b8-a562-3e1ccdc8d562\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c958f"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.619834 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32b31753-6a52-4364-b01f-9d50aeac7c13-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-78nss\" (UID: \"32b31753-6a52-4364-b01f-9d50aeac7c13\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-78nss"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.637901 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-zg529"]
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.647643 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0a1d56e-00d0-4e88-bdfb-461578e327e6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-w9gmp\" (UID: \"a0a1d56e-00d0-4e88-bdfb-461578e327e6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w9gmp"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.658405 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8c9b912-fd75-4f62-bb98-9a37bd5e7ebc-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mkplf\" (UID: \"c8c9b912-fd75-4f62-bb98-9a37bd5e7ebc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkplf"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.661346 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn92f\" (UniqueName: \"kubernetes.io/projected/2c9074bc-889d-4ce7-a250-6fc5984703e0-kube-api-access-xn92f\") pod \"marketplace-operator-79b997595-zcf44\" (UID: \"2c9074bc-889d-4ce7-a250-6fc5984703e0\") " pod="openshift-marketplace/marketplace-operator-79b997595-zcf44"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.687323 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zflp4\" (UniqueName: \"kubernetes.io/projected/625494cc-c7b0-4a0a-811c-d4822b1c0acc-kube-api-access-zflp4\") pod \"dns-default-kh9ph\" (UID: \"625494cc-c7b0-4a0a-811c-d4822b1c0acc\") " pod="openshift-dns/dns-default-kh9ph"
Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.687771 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 13:03:31 crc kubenswrapper[4721]: E0202 13:03:31.689034 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:32.189004302 +0000 UTC m=+152.491518691 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.695443 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkplf" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.704426 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f5gbr" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.704671 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2qsg\" (UniqueName: \"kubernetes.io/projected/6dc5b220-3e84-4a0a-9f7a-f27a007436f6-kube-api-access-p2qsg\") pod \"csi-hostpathplugin-7gtg4\" (UID: \"6dc5b220-3e84-4a0a-9f7a-f27a007436f6\") " pod="hostpath-provisioner/csi-hostpathplugin-7gtg4" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.708875 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-78nss" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.716181 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2wzzt" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.728468 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jdzwk" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.732458 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjb78\" (UniqueName: \"kubernetes.io/projected/cb2244ea-f203-4f66-9a4d-aad5e58a5c46-kube-api-access-gjb78\") pod \"catalog-operator-68c6474976-bz9nm\" (UID: \"cb2244ea-f203-4f66-9a4d-aad5e58a5c46\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bz9nm" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.736381 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-csktx" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.742630 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zcf44" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.747741 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bz9nm" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.764846 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-r47km" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.765886 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-c68d5" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.769775 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnzzr\" (UniqueName: \"kubernetes.io/projected/873a8c0c-9da4-4619-9ebf-7a327eb22b7e-kube-api-access-lnzzr\") pod \"collect-profiles-29500620-rxhcg\" (UID: \"873a8c0c-9da4-4619-9ebf-7a327eb22b7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-rxhcg" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.784408 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w9gmp" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.789551 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krxdl" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.790059 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:31 crc kubenswrapper[4721]: E0202 13:03:31.790438 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:32.290422335 +0000 UTC m=+152.592936724 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.798346 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c958f" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.831599 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-7gtg4" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.848347 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-kh9ph" Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.848998 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td8sr"] Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.888888 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jrfhj"] Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.891059 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:31 crc kubenswrapper[4721]: E0202 13:03:31.891424 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:32.391410236 +0000 UTC m=+152.693924625 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.917122 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2rjrk"] Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.917585 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-z25fz"] Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.943925 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-2dsnx"] Feb 02 13:03:31 crc kubenswrapper[4721]: I0202 13:03:31.993134 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:31 crc kubenswrapper[4721]: E0202 13:03:31.993585 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:32.493562429 +0000 UTC m=+152.796077008 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:32 crc kubenswrapper[4721]: W0202 13:03:32.003529 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf028d7b1_e474_45f8_9c4e_d1b2322175c7.slice/crio-80061aa979518cc3deff6b9b39f3aef1ce5a33b6cbfaffcec4325f4ed5c8e70e WatchSource:0}: Error finding container 80061aa979518cc3deff6b9b39f3aef1ce5a33b6cbfaffcec4325f4ed5c8e70e: Status 404 returned error can't find the container with id 80061aa979518cc3deff6b9b39f3aef1ce5a33b6cbfaffcec4325f4ed5c8e70e Feb 02 13:03:32 crc kubenswrapper[4721]: W0202 13:03:32.027191 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75fc13c2_ccc5_46a0_8a65_d6bc5340baab.slice/crio-e908c7ce1d810feb7859e0cf4d35f00393b9ea94e002a59b76c024bc0ace56e9 WatchSource:0}: Error finding container e908c7ce1d810feb7859e0cf4d35f00393b9ea94e002a59b76c024bc0ace56e9: Status 404 returned error can't find the container with id e908c7ce1d810feb7859e0cf4d35f00393b9ea94e002a59b76c024bc0ace56e9 Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.061637 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-rxhcg" Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.084492 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq"] Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.099973 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:32 crc kubenswrapper[4721]: E0202 13:03:32.100137 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:32.600107787 +0000 UTC m=+152.902622176 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.100371 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:32 crc kubenswrapper[4721]: E0202 13:03:32.100752 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:32.600736535 +0000 UTC m=+152.903250924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.114931 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-f4l8v"] Feb 02 13:03:32 crc kubenswrapper[4721]: W0202 13:03:32.135575 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae3f417e_2bae_44dd_973f_5314b6f64972.slice/crio-03d46b30c74ef5c8430b448c7ad678889af4272d7acbd48aceae6628fd4f71b5 WatchSource:0}: Error finding container 03d46b30c74ef5c8430b448c7ad678889af4272d7acbd48aceae6628fd4f71b5: Status 404 returned error can't find the container with id 03d46b30c74ef5c8430b448c7ad678889af4272d7acbd48aceae6628fd4f71b5 Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.201818 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:32 crc kubenswrapper[4721]: E0202 13:03:32.202126 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:32.702112227 +0000 UTC m=+153.004626606 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.209294 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qr8r7"] Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.209696 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-fgg7h"] Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.224713 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2jzch"] Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.228020 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6wdm7"] Feb 02 13:03:32 crc kubenswrapper[4721]: W0202 13:03:32.255498 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbace5e2a_2c1a_433c_bc00_9121a45aa515.slice/crio-37015924ad96e03e0f661a4fba73db346b9adb1eb9a8b110fa18f4af486c1aca WatchSource:0}: Error finding container 37015924ad96e03e0f661a4fba73db346b9adb1eb9a8b110fa18f4af486c1aca: Status 404 returned error can't find the container with id 37015924ad96e03e0f661a4fba73db346b9adb1eb9a8b110fa18f4af486c1aca Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.300623 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hlnsv"] Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.303436 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:32 crc kubenswrapper[4721]: E0202 13:03:32.304057 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:32.804046554 +0000 UTC m=+153.106560943 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.404767 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 13:03:32 crc kubenswrapper[4721]: E0202 13:03:32.405266 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:32.905248211 +0000 UTC m=+153.207762600 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.511475 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk"
Feb 02 13:03:32 crc kubenswrapper[4721]: E0202 13:03:32.512486 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:33.012470447 +0000 UTC m=+153.314984826 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.517331 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-78nss"]
Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.562296 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-ffkjd" podStartSLOduration=127.562277579 podStartE2EDuration="2m7.562277579s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:32.524456287 +0000 UTC m=+152.826970666" watchObservedRunningTime="2026-02-02 13:03:32.562277579 +0000 UTC m=+152.864791968"
Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.579467 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kq22p"]
Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.614054 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 13:03:32 crc kubenswrapper[4721]: E0202 13:03:32.614576 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:33.114555719 +0000 UTC m=+153.417070108 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.634701 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jrfhj" event={"ID":"4a636bbb-70b8-4b2a-96c6-94f9edba40cc","Type":"ContainerStarted","Data":"a33996efe12dac3d19d0fa4037ecc635f6ba9b302da27d202a6e47235462ef26"} Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.654836 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2rjrk" event={"ID":"75fc13c2-ccc5-46a0-8a65-d6bc5340baab","Type":"ContainerStarted","Data":"e908c7ce1d810feb7859e0cf4d35f00393b9ea94e002a59b76c024bc0ace56e9"} Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.668240 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbv2j" event={"ID":"67f000c5-5173-44e3-89e6-446c345a6c05","Type":"ContainerStarted","Data":"3a1667a5a46eba8d886a8fbbd8d7f7a9a2bc174ba18feb3a4ef74f6861bf3271"} Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.668297 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbv2j" event={"ID":"67f000c5-5173-44e3-89e6-446c345a6c05","Type":"ContainerStarted","Data":"92082378009f2e00a893951b41ea2c82fc949ed54229ccbc4da67e0536e3538c"} Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.674044 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2dsnx" event={"ID":"ae3f417e-2bae-44dd-973f-5314b6f64972","Type":"ContainerStarted","Data":"03d46b30c74ef5c8430b448c7ad678889af4272d7acbd48aceae6628fd4f71b5"} Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.700164 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8t7x8" event={"ID":"63b0f1ec-c2d9-4005-ba10-839949dbbcac","Type":"ContainerStarted","Data":"516ab5e09d3a567d7aa34953df58fe969f47679e7e5877e0ef18bd895e330728"} Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.700230 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8t7x8" event={"ID":"63b0f1ec-c2d9-4005-ba10-839949dbbcac","Type":"ContainerStarted","Data":"9f99838276207803d8e457f0e4737ba0b54dcc0e9d633a51cbe24ccd90f651a3"} Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.715874 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:32 crc kubenswrapper[4721]: E0202 13:03:32.716428 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-02 13:03:33.216405825 +0000 UTC m=+153.518920214 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.727088 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zt9ng" event={"ID":"af02a63f-5e62-47ff-baf5-1dc1e95dc1ad","Type":"ContainerStarted","Data":"bdc7f242349ef501ae60453f7027fd3b9ab0c5fdb0624d98e48543734baa6123"} Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.727132 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zt9ng" event={"ID":"af02a63f-5e62-47ff-baf5-1dc1e95dc1ad","Type":"ContainerStarted","Data":"72a9519fa163c4d5e947e7bea8efe4073413ddbd4dcca1685658420850559431"} Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.728002 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-zt9ng" Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.748040 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2jzch" event={"ID":"4a107c9e-3f0e-4d4b-8b02-65bda6793a2d","Type":"ContainerStarted","Data":"204dbbb074af32312956d8c5808e584109e5348e60b7f0ba197a03fb8d7567c5"} Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.750202 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" event={"ID":"962524c6-7992-43d5-a7f3-5fdd04297f24","Type":"ContainerStarted","Data":"81f1ed113ff28d261e45b8f089ff55331cfa48c5e28c0300ad3ede4e1aa70b95"} Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.753181 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.760491 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-f4l8v" event={"ID":"61358eab-20de-46bb-9701-dc736e6eb5ff","Type":"ContainerStarted","Data":"533d18c71783fb7d55dc1e3ccbdf776ca8bae4e98dcbd5fd2e6684297c6d4fb5"} Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.764351 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z25fz" event={"ID":"70d74e61-4d44-4a6c-8a14-16e131d79e47","Type":"ContainerStarted","Data":"9077b0d5727400ff737670e938f50acb1eeaf69ec2d4125726b027b461a0bd7d"} Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.778157 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkplf"] Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.781449 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvz9q" event={"ID":"202d08f0-f5ea-4414-b2e6-5a690148a823","Type":"ContainerStarted","Data":"5677dea813e91cff0125caea41dbe47841266e23be8dbe14bda6d75d556e4c6c"} Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.781475 4721 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvz9q" event={"ID":"202d08f0-f5ea-4414-b2e6-5a690148a823","Type":"ContainerStarted","Data":"1d70ffe0b3734085db9dfe928cc8392846bfd74b2233917dacca3de2113ee019"} Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.799528 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-7vhgv" event={"ID":"b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb","Type":"ContainerStarted","Data":"85c792eb18ace81b853014dd66f8dc7071938962eedf924be3e1db2064da8739"} Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.802699 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-zg529" event={"ID":"a1dde568-291e-40bf-9df7-18cd5449d0aa","Type":"ContainerStarted","Data":"8d1176f054af953add2a2532e07fd308e898dabf738f3dad9dfa3ffcd452a99d"} Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.803795 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l49ld" event={"ID":"3bbe9190-62bb-4079-afa7-adc9e970eae6","Type":"ContainerStarted","Data":"59fdf7e0fd87927cd832a352d934961ab5ba72d93251892b8dde5809f3cb586d"} Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.805633 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qr8r7" event={"ID":"bace5e2a-2c1a-433c-bc00-9121a45aa515","Type":"ContainerStarted","Data":"37015924ad96e03e0f661a4fba73db346b9adb1eb9a8b110fa18f4af486c1aca"} Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.807599 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td8sr" event={"ID":"f028d7b1-e474-45f8-9c4e-d1b2322175c7","Type":"ContainerStarted","Data":"80061aa979518cc3deff6b9b39f3aef1ce5a33b6cbfaffcec4325f4ed5c8e70e"} Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.812836 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zcf44"] Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.816798 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:32 crc kubenswrapper[4721]: E0202 13:03:32.821595 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:33.321573243 +0000 UTC m=+153.624087632 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.823555 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6wdm7" event={"ID":"0a98572f-0fab-4dab-9935-6bf52cdc7fff","Type":"ContainerStarted","Data":"c27507bd9c98546b987f4412d4fcbe150bb83423cdf6072cd7347f04919d8804"} Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.825756 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq" event={"ID":"ab4249b9-1751-45d6-be3f-58668c4542bd","Type":"ContainerStarted","Data":"7f732d595674c6b44c99146a616adea5454f90e5957ad804ba076c7a4cc56ed6"} Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.828690 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fgg7h" event={"ID":"f2a74f12-1bed-4744-9dec-57282d5301eb","Type":"ContainerStarted","Data":"665a012355c7f53b5297723bfd7f4326c471d3e631d117239a2f21c841495568"} Feb 02 13:03:32 crc kubenswrapper[4721]: I0202 13:03:32.919756 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:32 crc kubenswrapper[4721]: E0202 13:03:32.928617 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:33.424026655 +0000 UTC m=+153.726541044 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.018686 4721 patch_prober.go:28] interesting pod/downloads-7954f5f757-zt9ng container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.018735 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zt9ng" podUID="af02a63f-5e62-47ff-baf5-1dc1e95dc1ad" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.019037 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f" podStartSLOduration=127.019026897 podStartE2EDuration="2m7.019026897s" podCreationTimestamp="2026-02-02 13:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:32.956492436 +0000 UTC m=+153.259006845" watchObservedRunningTime="2026-02-02 13:03:33.019026897 +0000 UTC m=+153.321541286" Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.021372 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-ffkjd" Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.021921 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:33 crc kubenswrapper[4721]: E0202 13:03:33.022459 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:33.522436584 +0000 UTC m=+153.824950973 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.123686 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:33 crc kubenswrapper[4721]: E0202 13:03:33.124296 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:33.624282298 +0000 UTC m=+153.926796687 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.229737 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:33 crc kubenswrapper[4721]: E0202 13:03:33.230122 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:33.730091866 +0000 UTC m=+154.032606255 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.332156 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:33 crc kubenswrapper[4721]: E0202 13:03:33.332896 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:33.832875827 +0000 UTC m=+154.135390226 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.395747 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-7vhgv" Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.433377 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:33 crc kubenswrapper[4721]: E0202 13:03:33.433876 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:33.933849257 +0000 UTC m=+154.236363646 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.537541 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:33 crc kubenswrapper[4721]: E0202 13:03:33.537904 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:34.037889895 +0000 UTC m=+154.340404284 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.537912 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-pnfph" podStartSLOduration=128.537899395 podStartE2EDuration="2m8.537899395s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:33.537015559 +0000 UTC m=+153.839529948" watchObservedRunningTime="2026-02-02 13:03:33.537899395 +0000 UTC m=+153.840413794" Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.638793 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:33 crc kubenswrapper[4721]: E0202 13:03:33.639443 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:34.13940958 +0000 UTC m=+154.441923969 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.639911 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:33 crc kubenswrapper[4721]: E0202 13:03:33.640339 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:34.140309006 +0000 UTC m=+154.442823395 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.663660 4721 patch_prober.go:28] interesting pod/router-default-5444994796-7vhgv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 13:03:33 crc kubenswrapper[4721]: [-]has-synced failed: reason withheld Feb 02 13:03:33 crc kubenswrapper[4721]: [+]process-running ok Feb 02 13:03:33 crc kubenswrapper[4721]: healthz check failed Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.663721 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7vhgv" podUID="b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.740937 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:33 crc kubenswrapper[4721]: E0202 13:03:33.741529 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:34.241513422 +0000 UTC m=+154.544027811 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.766939 4721 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-fqbhq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.14:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.766987 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" podUID="962524c6-7992-43d5-a7f3-5fdd04297f24" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.14:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.826413 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-zt9ng" podStartSLOduration=128.826387947 podStartE2EDuration="2m8.826387947s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:33.766716886 +0000 UTC m=+154.069231275" watchObservedRunningTime="2026-02-02 13:03:33.826387947 +0000 UTC m=+154.128902336" Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.842979 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:33 crc kubenswrapper[4721]: E0202 13:03:33.843403 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:34.343391108 +0000 UTC m=+154.645905497 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.886093 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-8t7x8" podStartSLOduration=5.886056386 podStartE2EDuration="5.886056386s" podCreationTimestamp="2026-02-02 13:03:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:33.833392675 +0000 UTC m=+154.135907084" watchObservedRunningTime="2026-02-02 13:03:33.886056386 +0000 UTC m=+154.188570785" Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.887509 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-7vhgv" podStartSLOduration=128.887504237 podStartE2EDuration="2m8.887504237s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:33.886368856 +0000 UTC m=+154.188883245" watchObservedRunningTime="2026-02-02 13:03:33.887504237 +0000 UTC m=+154.190018626" Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.891102 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkplf" event={"ID":"c8c9b912-fd75-4f62-bb98-9a37bd5e7ebc","Type":"ContainerStarted","Data":"8de304604797617bd37941c34b8dadab3dda32a570ac8404e01cefa5aa3f8bd2"} Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.924494 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pvz9q" podStartSLOduration=128.924460835 podStartE2EDuration="2m8.924460835s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:33.914055159 +0000 UTC m=+154.216569548" watchObservedRunningTime="2026-02-02 13:03:33.924460835 +0000 UTC m=+154.226975234" Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.933487 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2wzzt"] Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.946647 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:33 crc kubenswrapper[4721]: E0202 13:03:33.946925 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-02 13:03:34.446910861 +0000 UTC m=+154.749425250 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.949190 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qr8r7" event={"ID":"bace5e2a-2c1a-433c-bc00-9121a45aa515","Type":"ContainerStarted","Data":"c313e656890a11fee3ecb133b8c052827b22699f41ea38612bd581629105bf0e"} Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.964921 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hlnsv" event={"ID":"b06f9eba-0e3d-47fb-a386-a166987e78fd","Type":"ContainerStarted","Data":"9a3a10624151d9673d31391fff8105e05226e17aa8e1b700d2dd15cc01de39b9"} Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.979394 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" podStartSLOduration=128.97937452 podStartE2EDuration="2m8.97937452s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:33.975740427 +0000 UTC m=+154.278254816" watchObservedRunningTime="2026-02-02 13:03:33.97937452 +0000 UTC m=+154.281888909" Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.982431 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kq22p" event={"ID":"53727b9f-9f5f-4f6e-8fa2-a6018c8225f5","Type":"ContainerStarted","Data":"daf20f88f77c8ece317b4aa2425aba24b61b99d07deb21c59cb90fffbfa4c955"} Feb 02 13:03:33 crc kubenswrapper[4721]: I0202 13:03:33.991393 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zcf44" event={"ID":"2c9074bc-889d-4ce7-a250-6fc5984703e0","Type":"ContainerStarted","Data":"420f57653af414c784badc1fa24ed323a8bb52594720023f0b3bf03137ac12b0"} Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.021056 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-78nss" event={"ID":"32b31753-6a52-4364-b01f-9d50aeac7c13","Type":"ContainerStarted","Data":"16089a7aa7a271649dfcf92598ea5774a7dc62f01512306dffc4a2d67486cf7c"} Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.026996 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-f4l8v" event={"ID":"61358eab-20de-46bb-9701-dc736e6eb5ff","Type":"ContainerStarted","Data":"cdc58e2761d43372f1d268672fb769c108a9bc911c5903744f58b81765d9741f"} Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.028355 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-f4l8v" Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.034403 4721 patch_prober.go:28] interesting pod/console-operator-58897d9998-f4l8v container/console-operator 
namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.034436 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-f4l8v" podUID="61358eab-20de-46bb-9701-dc736e6eb5ff" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.036799 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l49ld" event={"ID":"3bbe9190-62bb-4079-afa7-adc9e970eae6","Type":"ContainerStarted","Data":"222b5aac1078254682082aa0f7d10df6cb374707e022072e262ca5b6a68337a8"} Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.047513 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:34 crc kubenswrapper[4721]: E0202 13:03:34.047888 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:34.54787529 +0000 UTC m=+154.850389689 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.060765 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-kcw66" event={"ID":"64fc7a32-5852-4e03-b1b7-1663f7f52b65","Type":"ContainerStarted","Data":"cabf915466d27de266f83931f8e2f058da9b66dbd2cfd11eccef4fd3d8d537d9"} Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.072279 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-f4l8v" podStartSLOduration=129.072255551 podStartE2EDuration="2m9.072255551s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:34.070191613 +0000 UTC m=+154.372706012" watchObservedRunningTime="2026-02-02 13:03:34.072255551 +0000 UTC m=+154.374769950" Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.074317 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbv2j" podStartSLOduration=129.074306299 podStartE2EDuration="2m9.074306299s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-02 13:03:33.998264235 +0000 UTC m=+154.300778624" watchObservedRunningTime="2026-02-02 13:03:34.074306299 +0000 UTC m=+154.376820688" Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.075518 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2rjrk" event={"ID":"75fc13c2-ccc5-46a0-8a65-d6bc5340baab","Type":"ContainerStarted","Data":"4205ae6283ef57c9351203b27201b46a698d790ed7fdc44a4f679e9a157827fb"} Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.106522 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-zg529" event={"ID":"a1dde568-291e-40bf-9df7-18cd5449d0aa","Type":"ContainerStarted","Data":"24cc7e320cb8aa5dd535e97478c79d830906512c948d824576751370851008a0"} Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.123691 4721 generic.go:334] "Generic (PLEG): container finished" podID="ab4249b9-1751-45d6-be3f-58668c4542bd" containerID="9736769bdc705fc91b1d3ed306f50b43c662f0f5c195d71dedfa10fcfe69e90f" exitCode=0 Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.123758 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq" event={"ID":"ab4249b9-1751-45d6-be3f-58668c4542bd","Type":"ContainerDied","Data":"9736769bdc705fc91b1d3ed306f50b43c662f0f5c195d71dedfa10fcfe69e90f"} Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.135921 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2dsnx" event={"ID":"ae3f417e-2bae-44dd-973f-5314b6f64972","Type":"ContainerStarted","Data":"f6b3ae8c770ea590746df91083a751d3ccb8e36d4619a57121eebb509e13924a"} Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.137707 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z25fz" event={"ID":"70d74e61-4d44-4a6c-8a14-16e131d79e47","Type":"ContainerStarted","Data":"dbd4c9fba01b5efdc30190a223c20ae18c09fc0a5fc704cc9937daff3e4fce36"} Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.138672 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jrfhj" event={"ID":"4a636bbb-70b8-4b2a-96c6-94f9edba40cc","Type":"ContainerStarted","Data":"0cf5199ed100e3e98a5cc08713ff630ccd68a9a593e15ef496c1e86b0c0a6e54"} Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.147980 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2rjrk" podStartSLOduration=129.147966605 podStartE2EDuration="2m9.147966605s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:34.121056323 +0000 UTC m=+154.423570722" watchObservedRunningTime="2026-02-02 13:03:34.147966605 +0000 UTC m=+154.450480994" Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.149811 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:34 crc kubenswrapper[4721]: E0202 13:03:34.152360 4721 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:34.65233386 +0000 UTC m=+154.954848249 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.164308 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td8sr" event={"ID":"f028d7b1-e474-45f8-9c4e-d1b2322175c7","Type":"ContainerStarted","Data":"cc6bc4adfcd7a74fe555803cac666054d590cd120b7c6deceeebb28df5ac858c"} Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.165258 4721 patch_prober.go:28] interesting pod/downloads-7954f5f757-zt9ng container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.165296 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zt9ng" podUID="af02a63f-5e62-47ff-baf5-1dc1e95dc1ad" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.177755 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-csktx"] Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.180556 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.190445 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-jrfhj" podStartSLOduration=129.190429679 podStartE2EDuration="2m9.190429679s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:34.189684017 +0000 UTC m=+154.492198406" watchObservedRunningTime="2026-02-02 13:03:34.190429679 +0000 UTC m=+154.492944068" Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.250927 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-2dsnx" podStartSLOduration=129.250911502 podStartE2EDuration="2m9.250911502s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:34.233617792 +0000 UTC m=+154.536132181" watchObservedRunningTime="2026-02-02 13:03:34.250911502 +0000 UTC m=+154.553425891" Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.252475 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-c958f"] Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.255409 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:34 crc kubenswrapper[4721]: E0202 13:03:34.275372 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:34.775352564 +0000 UTC m=+155.077866953 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.296885 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7gtg4"] Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.302272 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w9gmp"] Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.307097 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-kh9ph"] Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.351458 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-td8sr" podStartSLOduration=129.351430779 podStartE2EDuration="2m9.351430779s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:34.315756048 +0000 UTC m=+154.618270447" watchObservedRunningTime="2026-02-02 13:03:34.351430779 +0000 UTC m=+154.653945168" Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.357172 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:34 crc kubenswrapper[4721]: E0202 13:03:34.357348 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:34.857320606 +0000 UTC m=+155.159834995 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.357551 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:34 crc kubenswrapper[4721]: E0202 13:03:34.357886 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:34.857879581 +0000 UTC m=+155.160393970 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.364835 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-f5gbr"] Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.399262 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-r47km"] Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.401527 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bz9nm"] Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.403129 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krxdl"] Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.404427 4721 patch_prober.go:28] interesting pod/router-default-5444994796-7vhgv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 13:03:34 crc kubenswrapper[4721]: [-]has-synced failed: reason withheld Feb 02 13:03:34 crc kubenswrapper[4721]: [+]process-running ok Feb 02 13:03:34 crc kubenswrapper[4721]: healthz check failed Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.404463 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7vhgv" podUID="b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.450727 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-c68d5"] Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 
13:03:34.461948 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jdzwk"] Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.462736 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:34 crc kubenswrapper[4721]: E0202 13:03:34.463139 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:34.963124663 +0000 UTC m=+155.265639042 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:34 crc kubenswrapper[4721]: W0202 13:03:34.484871 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6104d27e_fefa_4e2a_9b9e_62013c96f664.slice/crio-c5040c8797ef39726b4af64827b6a2417ffe50aa5ddb4db88a844c65aeb34b5b WatchSource:0}: Error finding container c5040c8797ef39726b4af64827b6a2417ffe50aa5ddb4db88a844c65aeb34b5b: Status 404 returned error can't find the container with id c5040c8797ef39726b4af64827b6a2417ffe50aa5ddb4db88a844c65aeb34b5b Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.487755 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500620-rxhcg"] Feb 02 13:03:34 crc kubenswrapper[4721]: W0202 13:03:34.528471 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46f85b66_5a30_4bef_909c_26750b18e72d.slice/crio-a8c8c985d386766f9c9bda3485f99c69a2946aa1ec8a32de2d8987042f9826c8 WatchSource:0}: Error finding container a8c8c985d386766f9c9bda3485f99c69a2946aa1ec8a32de2d8987042f9826c8: Status 404 returned error can't find the container with id a8c8c985d386766f9c9bda3485f99c69a2946aa1ec8a32de2d8987042f9826c8 Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.564637 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:34 crc kubenswrapper[4721]: E0202 13:03:34.565077 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:35.06504672 +0000 UTC m=+155.367561109 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.665998 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:34 crc kubenswrapper[4721]: E0202 13:03:34.666670 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:35.166651978 +0000 UTC m=+155.469166377 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.767690 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:34 crc kubenswrapper[4721]: E0202 13:03:34.772434 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:35.272395083 +0000 UTC m=+155.574909492 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.870357 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:34 crc kubenswrapper[4721]: E0202 13:03:34.870737 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:35.370722669 +0000 UTC m=+155.673237048 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.979726 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:34 crc kubenswrapper[4721]: E0202 13:03:34.980087 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:35.480056376 +0000 UTC m=+155.782570765 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.980524 4721 csr.go:261] certificate signing request csr-46nkk is approved, waiting to be issued Feb 02 13:03:34 crc kubenswrapper[4721]: I0202 13:03:34.995844 4721 csr.go:257] certificate signing request csr-46nkk is issued Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.080378 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:35 crc kubenswrapper[4721]: E0202 13:03:35.080653 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:35.580639755 +0000 UTC m=+155.883154144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.176712 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-rxhcg" event={"ID":"873a8c0c-9da4-4619-9ebf-7a327eb22b7e","Type":"ContainerStarted","Data":"e5df09a9819f06ce199926abbd018fc8f1a3ae0cbae66765ccb3d7e1c3a1a81c"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.176757 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-rxhcg" event={"ID":"873a8c0c-9da4-4619-9ebf-7a327eb22b7e","Type":"ContainerStarted","Data":"a3ba63e33fb814de0a45f3a1bb2d277752ed10e110e4a55af2fb1ec65495a8cc"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.185554 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:35 crc kubenswrapper[4721]: E0202 13:03:35.186041 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:35.68602415 +0000 UTC m=+155.988538539 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.189909 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w9gmp" event={"ID":"a0a1d56e-00d0-4e88-bdfb-461578e327e6","Type":"ContainerStarted","Data":"c3b326b65fcfbf8fe7658c5e452941da2a6ae05b45d0bb4cab349594633e9799"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.195369 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7gtg4" event={"ID":"6dc5b220-3e84-4a0a-9f7a-f27a007436f6","Type":"ContainerStarted","Data":"89c981d0d28206453ee7b4dd2abc982c40975e7768fcebc94fb10e119811b66b"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.208565 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-rxhcg" podStartSLOduration=130.208545528 podStartE2EDuration="2m10.208545528s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:35.207441057 +0000 UTC m=+155.509955446" watchObservedRunningTime="2026-02-02 13:03:35.208545528 +0000 UTC m=+155.511059917" Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.209669 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jdzwk" event={"ID":"46f85b66-5a30-4bef-909c-26750b18e72d","Type":"ContainerStarted","Data":"c99ca5207922d482d12e385981ae8866eee591181a2eadd21b9431af1880d536"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.209701 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jdzwk" event={"ID":"46f85b66-5a30-4bef-909c-26750b18e72d","Type":"ContainerStarted","Data":"a8c8c985d386766f9c9bda3485f99c69a2946aa1ec8a32de2d8987042f9826c8"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.222544 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-csktx" event={"ID":"ab926544-a708-445a-aaf9-0e3ad4593676","Type":"ContainerStarted","Data":"3a484d5baeedc1c3a99091c629d18674bdc023dba7fc279b1b427b38d4511479"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.222616 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-csktx" event={"ID":"ab926544-a708-445a-aaf9-0e3ad4593676","Type":"ContainerStarted","Data":"eee1366e170e03c54c022bcac6ae246546ca4799522f5652a6e19c5f099c5abf"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.230685 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq" event={"ID":"ab4249b9-1751-45d6-be3f-58668c4542bd","Type":"ContainerStarted","Data":"fca0e3a431ad3dc1fd30cd813ef83109ac7dbc0572362f94da1481a13f4a7825"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.231452 4721 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jdzwk" podStartSLOduration=130.231435896 podStartE2EDuration="2m10.231435896s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:35.229031429 +0000 UTC m=+155.531545828" watchObservedRunningTime="2026-02-02 13:03:35.231435896 +0000 UTC m=+155.533950285" Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.246641 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zcf44" event={"ID":"2c9074bc-889d-4ce7-a250-6fc5984703e0","Type":"ContainerStarted","Data":"6e95f003df211d09b9562e86431541c3b7c3e84c41d01ea470d07b5cb914180b"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.246678 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zcf44" Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.248675 4721 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zcf44 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.248768 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zcf44" podUID="2c9074bc-889d-4ce7-a250-6fc5984703e0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.255658 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-r47km" event={"ID":"37b1d658-0b12-4afd-9b42-9f54f553d432","Type":"ContainerStarted","Data":"15d3f8ae4d864c8501535fef147df2c2bf89fadd84ee90b3b2d6e8a544d6d5c5"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.255707 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-r47km" event={"ID":"37b1d658-0b12-4afd-9b42-9f54f553d432","Type":"ContainerStarted","Data":"3c3b769c8caaa9dd65a847a67f93dcdcb50715c187002d82bf642870ae1ad65f"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.290941 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-kcw66" event={"ID":"64fc7a32-5852-4e03-b1b7-1663f7f52b65","Type":"ContainerStarted","Data":"744cb2b7251373e328f634242bd5951588b52ac275861886686451996b57f157"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.296242 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:35 crc kubenswrapper[4721]: E0202 13:03:35.303970 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
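[editor's note] The pod_startup_latency_tracker entries report podStartSLOduration, which excludes image-pull time; with the zero firstStartedPulling/lastFinishedPulling timestamps above, it coincides with podStartE2EDuration and appears to be simply the gap from podCreationTimestamp to the watched running time. Worked with the collect-profiles values from these lines (which timestamp feeds the metric is inferred from the numbers, so treat this as a sketch):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the tracker line for collect-profiles-29500620-rxhcg.
	created, _ := time.Parse(time.RFC3339, "2026-02-02T13:01:25Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2026-02-02T13:03:35.208545528Z")
	// Prints 2m10.208545528s, matching podStartSLOduration=130.208545528.
	fmt.Println(observed.Sub(created))
}
```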
No retries permitted until 2026-02-02 13:03:35.80392345 +0000 UTC m=+156.106437839 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.304428 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:35 crc kubenswrapper[4721]: E0202 13:03:35.304782 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:35.804760934 +0000 UTC m=+156.107275323 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.305852 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-csktx" podStartSLOduration=129.305832404 podStartE2EDuration="2m9.305832404s" podCreationTimestamp="2026-02-02 13:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:35.269143854 +0000 UTC m=+155.571658243" watchObservedRunningTime="2026-02-02 13:03:35.305832404 +0000 UTC m=+155.608346793" Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.307432 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-zcf44" podStartSLOduration=130.307424259 podStartE2EDuration="2m10.307424259s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:35.304803655 +0000 UTC m=+155.607318054" watchObservedRunningTime="2026-02-02 13:03:35.307424259 +0000 UTC m=+155.609938648" Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.321176 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kq22p" event={"ID":"53727b9f-9f5f-4f6e-8fa2-a6018c8225f5","Type":"ContainerStarted","Data":"5161b213f903145ea88c0c0561cb8ec55279a296067271b5a1e33b49be714025"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.338963 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-r47km" 
podStartSLOduration=129.338946052 podStartE2EDuration="2m9.338946052s" podCreationTimestamp="2026-02-02 13:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:35.337795569 +0000 UTC m=+155.640309958" watchObservedRunningTime="2026-02-02 13:03:35.338946052 +0000 UTC m=+155.641460441" Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.360236 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-78nss" event={"ID":"32b31753-6a52-4364-b01f-9d50aeac7c13","Type":"ContainerStarted","Data":"4ff2c99b0f1215e20b5ea256b46b69f44328458fd8c75861e1839a1eeeb1f151"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.373303 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkplf" event={"ID":"c8c9b912-fd75-4f62-bb98-9a37bd5e7ebc","Type":"ContainerStarted","Data":"bcb3b94189b8f6b7f370b3ebefaa653af4936f6edb10b47441866b338bb5d34c"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.404388 4721 patch_prober.go:28] interesting pod/router-default-5444994796-7vhgv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 13:03:35 crc kubenswrapper[4721]: [-]has-synced failed: reason withheld Feb 02 13:03:35 crc kubenswrapper[4721]: [+]process-running ok Feb 02 13:03:35 crc kubenswrapper[4721]: healthz check failed Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.404462 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7vhgv" podUID="b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.407833 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:35 crc kubenswrapper[4721]: E0202 13:03:35.408356 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:35.908282166 +0000 UTC m=+156.210796565 (durationBeforeRetry 500ms). 
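[editor's note] Both probe failures in this window follow the kubelet's HTTP-probe semantics: a TCP connect refusal (marketplace-operator, packageserver) or a status outside the 2xx/3xx range (the router's 500) counts as failure, and the router's verbose healthz body lists each sub-check as "[+]" or "[-]" ("[-]backend-http failed: reason withheld"). A stdlib sketch with the same success criterion; the URL is the marketplace endpoint from the earlier readiness failure and is only an example:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func main() {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get("http://10.217.0.34:8080/healthz")
	if err != nil {
		// e.g. "connect: connection refused", as in the prober.go lines above.
		fmt.Println("probe failure:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		fmt.Println("probe success")
		return
	}
	fmt.Println("probe failure: HTTP", resp.StatusCode)
	// Verbose healthz bodies report one check per line: [+] ok, [-] failed.
	for _, line := range strings.Split(string(body), "\n") {
		if strings.HasPrefix(line, "[-]") {
			fmt.Println("failing check:", line)
		}
	}
}
```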
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.408659 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.409164 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq" podStartSLOduration=130.409117789 podStartE2EDuration="2m10.409117789s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:35.379632114 +0000 UTC m=+155.682146503" watchObservedRunningTime="2026-02-02 13:03:35.409117789 +0000 UTC m=+155.711632188" Feb 02 13:03:35 crc kubenswrapper[4721]: E0202 13:03:35.410631 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:35.910609792 +0000 UTC m=+156.213124181 (durationBeforeRetry 500ms). 
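[editor's note] Every failed volume operation above is rescheduled with "durationBeforeRetry 500ms", which is the initial step of the kubelet's per-operation exponential backoff; the retries visible in this window all log that first 500ms step. A sketch of the doubling-with-cap pattern; the 2m2s cap is an assumption borrowed from kubelet volume-retry defaults and is not itself visible in this log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond                // initial backoff seen in the log
	const maxDelay = 2*time.Minute + 2*time.Second // assumed cap
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("attempt %d: wait %v before retrying\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```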
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.417292 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-c68d5" event={"ID":"6104d27e-fefa-4e2a-9b9e-62013c96f664","Type":"ContainerStarted","Data":"c5040c8797ef39726b4af64827b6a2417ffe50aa5ddb4db88a844c65aeb34b5b"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.424596 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-78nss" podStartSLOduration=130.424579717 podStartE2EDuration="2m10.424579717s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:35.421991214 +0000 UTC m=+155.724505603" watchObservedRunningTime="2026-02-02 13:03:35.424579717 +0000 UTC m=+155.727094116" Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.432756 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l49ld" event={"ID":"3bbe9190-62bb-4079-afa7-adc9e970eae6","Type":"ContainerStarted","Data":"86fd6e993a06d4d91a16ddb7c3a024205e6c4608b70cb619dc6637169574ffab"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.466667 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2wzzt" event={"ID":"debfaa13-5820-4570-a447-8ef48903144c","Type":"ContainerStarted","Data":"da3c6102abe819addd1522029e3af039e678a12c624cde39aed41a2c321e3ade"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.466708 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2wzzt" event={"ID":"debfaa13-5820-4570-a447-8ef48903144c","Type":"ContainerStarted","Data":"11762e7d97ad277d747cb94e2e100ba563f76d9f9f428ccfbc766da978836302"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.466718 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2wzzt" event={"ID":"debfaa13-5820-4570-a447-8ef48903144c","Type":"ContainerStarted","Data":"5c48c7658ea296353132e081ad07eaec773355461c09ff0e4f19c84bf52e6705"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.467287 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2wzzt" Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.474088 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-kcw66" podStartSLOduration=130.47405723 podStartE2EDuration="2m10.47405723s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:35.45538239 +0000 UTC m=+155.757896789" 
watchObservedRunningTime="2026-02-02 13:03:35.47405723 +0000 UTC m=+155.776571619" Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.475223 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-kq22p" podStartSLOduration=7.475217262 podStartE2EDuration="7.475217262s" podCreationTimestamp="2026-02-02 13:03:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:35.472962758 +0000 UTC m=+155.775477147" watchObservedRunningTime="2026-02-02 13:03:35.475217262 +0000 UTC m=+155.777731651" Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.512361 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2jzch" event={"ID":"4a107c9e-3f0e-4d4b-8b02-65bda6793a2d","Type":"ContainerStarted","Data":"a6deb52a71363fd4f383c9ee9d1bd67067af795681ba66fe2ff0fb4273539d08"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.512831 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:35 crc kubenswrapper[4721]: E0202 13:03:35.513057 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:36.013040903 +0000 UTC m=+156.315555292 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.513239 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:35 crc kubenswrapper[4721]: E0202 13:03:35.513492 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:36.013486236 +0000 UTC m=+156.316000625 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.553483 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6wdm7" event={"ID":"0a98572f-0fab-4dab-9935-6bf52cdc7fff","Type":"ContainerStarted","Data":"ea214fd18bf76c462f5154b629db887d121191dc24151aed967735447e721aa2"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.559401 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qr8r7" event={"ID":"bace5e2a-2c1a-433c-bc00-9121a45aa515","Type":"ContainerStarted","Data":"f2498a2579c452883ea2adc933b84935f1c81166a594286ae8b2ff491993bc1d"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.599009 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkplf" podStartSLOduration=130.598988888 podStartE2EDuration="2m10.598988888s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:35.512304073 +0000 UTC m=+155.814818462" watchObservedRunningTime="2026-02-02 13:03:35.598988888 +0000 UTC m=+155.901503277" Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.616540 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:35 crc kubenswrapper[4721]: E0202 13:03:35.617539 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:36.117521713 +0000 UTC m=+156.420036112 (durationBeforeRetry 500ms). 
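[editor's note] The reconciler is handling the same PVC-backed volume on two fronts at once: tearing it down for the departed pod UID 8f668bae-612b-4b75-9490-919e737c6a3b and setting it up for the replacement image-registry-697d97f7c8-wlhhk, with both sides blocked on the unregistered driver. To see which pods currently reference claims in that namespace, a sketch (namespace from the log, kubeconfig path assumed):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config") // assumed path
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := client.CoreV1().Pods("openshift-image-registry").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		for _, v := range p.Spec.Volumes {
			if v.PersistentVolumeClaim != nil {
				fmt.Printf("%s (uid %s) mounts claim %s\n", p.Name, p.UID, v.PersistentVolumeClaim.ClaimName)
			}
		}
	}
}
```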
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.635442 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l49ld" podStartSLOduration=130.63542336 podStartE2EDuration="2m10.63542336s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:35.600395068 +0000 UTC m=+155.902909457" watchObservedRunningTime="2026-02-02 13:03:35.63542336 +0000 UTC m=+155.937937739" Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.670763 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-zg529" event={"ID":"a1dde568-291e-40bf-9df7-18cd5449d0aa","Type":"ContainerStarted","Data":"4dfdfa3de3bd05c9ac15a39f27fdf71821bae33b58be473cf1629df4faa3b249"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.671078 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-2jzch" podStartSLOduration=130.671048369 podStartE2EDuration="2m10.671048369s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:35.670503614 +0000 UTC m=+155.973017993" watchObservedRunningTime="2026-02-02 13:03:35.671048369 +0000 UTC m=+155.973562748" Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.672116 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2wzzt" podStartSLOduration=129.672111799 podStartE2EDuration="2m9.672111799s" podCreationTimestamp="2026-02-02 13:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:35.640829104 +0000 UTC m=+155.943343493" watchObservedRunningTime="2026-02-02 13:03:35.672111799 +0000 UTC m=+155.974626188" Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.714562 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hlnsv" event={"ID":"b06f9eba-0e3d-47fb-a386-a166987e78fd","Type":"ContainerStarted","Data":"df15f53ea93e89750e5f950d8d33fff9a9ea7f43c52a31c89a745f8551b59cde"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.715413 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hlnsv" Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.720465 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:35 crc kubenswrapper[4721]: E0202 13:03:35.721382 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:36.221365894 +0000 UTC m=+156.523880283 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.734601 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hlnsv" Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.737059 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qr8r7" podStartSLOduration=130.737044878 podStartE2EDuration="2m10.737044878s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:35.709270362 +0000 UTC m=+156.011784751" watchObservedRunningTime="2026-02-02 13:03:35.737044878 +0000 UTC m=+156.039559287" Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.741818 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krxdl" event={"ID":"c5b5487e-8a60-4967-b0f3-1d983c559f8a","Type":"ContainerStarted","Data":"05942b3e4764cef408eca65ef809cb3200501f49bf3319b23e4a403483cf33e1"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.741909 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krxdl" event={"ID":"c5b5487e-8a60-4967-b0f3-1d983c559f8a","Type":"ContainerStarted","Data":"b36b044de996d054bd6154612f956bcb38cfaee73c1f49bafe2226cf6641c478"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.742839 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krxdl" Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.744760 4721 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-krxdl container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" start-of-body= Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.744797 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krxdl" podUID="c5b5487e-8a60-4967-b0f3-1d983c559f8a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.774640 4721 generic.go:334] "Generic (PLEG): container finished" podID="f2a74f12-1bed-4744-9dec-57282d5301eb" 
containerID="5d044c245a6bf58bd6b2e12977072e9547408c5abc9b4ec2e3376d67a3734b1c" exitCode=0 Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.774763 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fgg7h" event={"ID":"f2a74f12-1bed-4744-9dec-57282d5301eb","Type":"ContainerDied","Data":"5d044c245a6bf58bd6b2e12977072e9547408c5abc9b4ec2e3376d67a3734b1c"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.778026 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bz9nm" event={"ID":"cb2244ea-f203-4f66-9a4d-aad5e58a5c46","Type":"ContainerStarted","Data":"94760d753ba4419ca2c51d97fbed3f37ab09e4a9d8196aa4a68c6c65f2ebeeda"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.778091 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bz9nm" event={"ID":"cb2244ea-f203-4f66-9a4d-aad5e58a5c46","Type":"ContainerStarted","Data":"c7ff8128edb569b53e1fd8002b5f74481b94875605c8b92ab77686d729b43f08"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.778762 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bz9nm" Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.780929 4721 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-bz9nm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.781567 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bz9nm" podUID="cb2244ea-f203-4f66-9a4d-aad5e58a5c46" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.812314 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hlnsv" podStartSLOduration=129.81229714 podStartE2EDuration="2m9.81229714s" podCreationTimestamp="2026-02-02 13:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:35.810558191 +0000 UTC m=+156.113072580" watchObservedRunningTime="2026-02-02 13:03:35.81229714 +0000 UTC m=+156.114811519" Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.812582 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6wdm7" podStartSLOduration=130.812578278 podStartE2EDuration="2m10.812578278s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:35.73780842 +0000 UTC m=+156.040322809" watchObservedRunningTime="2026-02-02 13:03:35.812578278 +0000 UTC m=+156.115092667" Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.823982 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.824178 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c958f" event={"ID":"911d6570-6a65-42b8-a562-3e1ccdc8d562","Type":"ContainerStarted","Data":"f0dcb34fe6bc0451437480fb90656ae130cd5c8a2cc6405231d2e07c997c5770"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.824227 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c958f" event={"ID":"911d6570-6a65-42b8-a562-3e1ccdc8d562","Type":"ContainerStarted","Data":"11bf8925ab36a7f8feacf8fe485e2f9f35eafcb7a10b06f41bc5045d87fc04e2"} Feb 02 13:03:35 crc kubenswrapper[4721]: E0202 13:03:35.825364 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:36.325344959 +0000 UTC m=+156.627859408 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.848244 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-zg529" podStartSLOduration=130.848227728 podStartE2EDuration="2m10.848227728s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:35.847405135 +0000 UTC m=+156.149919524" watchObservedRunningTime="2026-02-02 13:03:35.848227728 +0000 UTC m=+156.150742117" Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.876505 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kh9ph" event={"ID":"625494cc-c7b0-4a0a-811c-d4822b1c0acc","Type":"ContainerStarted","Data":"173ea934a18ec19b0c46c01ff6062ba029490ac9f3437ab721e44703df7dde5a"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.898890 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bz9nm" podStartSLOduration=129.898859492 podStartE2EDuration="2m9.898859492s" podCreationTimestamp="2026-02-02 13:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:35.897982027 +0000 UTC m=+156.200496416" watchObservedRunningTime="2026-02-02 13:03:35.898859492 +0000 UTC m=+156.201373881" Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.909588 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f5gbr" 
event={"ID":"2de0eb97-f51f-4468-9a68-eb9d6a7ce40d","Type":"ContainerStarted","Data":"ab988754b924cad3030308810cdaff3015e74064772da97cdd5f52fe3ae386f0"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.909642 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f5gbr" event={"ID":"2de0eb97-f51f-4468-9a68-eb9d6a7ce40d","Type":"ContainerStarted","Data":"56cc9472fd9c8c633f44e42ebde3e87948267970cd566223980a9506e2168ebd"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.925440 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:35 crc kubenswrapper[4721]: E0202 13:03:35.926577 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:36.426562187 +0000 UTC m=+156.729076576 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.930294 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z25fz" event={"ID":"70d74e61-4d44-4a6c-8a14-16e131d79e47","Type":"ContainerStarted","Data":"6ded78ee58b89f7f490ce23e98391149b41fe9dd20d3ea356745be901f162a02"} Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.936058 4721 patch_prober.go:28] interesting pod/downloads-7954f5f757-zt9ng container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.936158 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zt9ng" podUID="af02a63f-5e62-47ff-baf5-1dc1e95dc1ad" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 02 13:03:35 crc kubenswrapper[4721]: I0202 13:03:35.950248 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krxdl" podStartSLOduration=129.950233508 podStartE2EDuration="2m9.950233508s" podCreationTimestamp="2026-02-02 13:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:35.948551589 +0000 UTC m=+156.251065978" watchObservedRunningTime="2026-02-02 13:03:35.950233508 +0000 UTC m=+156.252747897" Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.000855 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate 
expiration is 2027-02-02 12:58:34 +0000 UTC, rotation deadline is 2026-12-02 17:01:42.055177192 +0000 UTC Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.000926 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7275h58m6.054254958s for next certificate rotation Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.018269 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c958f" podStartSLOduration=131.018249194 podStartE2EDuration="2m11.018249194s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:36.017433501 +0000 UTC m=+156.319947890" watchObservedRunningTime="2026-02-02 13:03:36.018249194 +0000 UTC m=+156.320763583" Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.030474 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:36 crc kubenswrapper[4721]: E0202 13:03:36.032975 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:36.532946871 +0000 UTC m=+156.835461260 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.053333 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f5gbr" podStartSLOduration=131.053317247 podStartE2EDuration="2m11.053317247s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:36.052516285 +0000 UTC m=+156.355030684" watchObservedRunningTime="2026-02-02 13:03:36.053317247 +0000 UTC m=+156.355831636" Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.134434 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:36 crc kubenswrapper[4721]: E0202 13:03:36.134791 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
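[editor's note] The certificate_manager lines above pick a rotation deadline of 2026-12-02 for a kubelet-serving certificate expiring 2027-02-02, i.e. roughly 83% of the way through its lifetime, consistent with the jittered rotation threshold client-go's certificate manager uses. A sketch under two stated assumptions: a one-year certificate (the issue time is inferred, only the expiry is in the log) and a ~70-90% jitter window:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	// notAfter is from the log line; notBefore is an assumed one-year-earlier issue time.
	notBefore := time.Date(2026, 2, 2, 12, 58, 34, 0, time.UTC)
	notAfter := time.Date(2027, 2, 2, 12, 58, 34, 0, time.UTC)
	lifetime := notAfter.Sub(notBefore)

	// Jittered deadline inside an assumed 70-90% window; the logged deadline
	// (2026-12-02, about 83% into the lifetime) falls inside it.
	deadline := notBefore.Add(time.Duration(float64(lifetime) * (0.7 + 0.2*rand.Float64())))

	now := time.Date(2026, 2, 2, 13, 3, 36, 0, time.UTC) // the log timestamp
	fmt.Println("rotation deadline:", deadline)
	fmt.Println("waiting:", deadline.Sub(now)) // the log reports 7275h58m6s
}
```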
No retries permitted until 2026-02-02 13:03:36.634778225 +0000 UTC m=+156.937292604 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.235867 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:36 crc kubenswrapper[4721]: E0202 13:03:36.236340 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:36.736295991 +0000 UTC m=+157.038810380 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.338003 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:36 crc kubenswrapper[4721]: E0202 13:03:36.338401 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:36.838386613 +0000 UTC m=+157.140901002 (durationBeforeRetry 500ms). 
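[editor's note] The "SyncLoop (PLEG)" entries throughout this window are the kubelet's own relisting of container state (ContainerStarted/ContainerDied with exit codes). From outside the node, the equivalent transitions can be observed by watching pod status. A sketch that prints running-container transitions for one namespace; the namespace (taken from the dns-default events nearby) and kubeconfig path are assumptions:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config") // assumed path
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	w, err := client.CoreV1().Pods("openshift-dns").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		for _, cs := range pod.Status.ContainerStatuses {
			if cs.State.Running != nil {
				fmt.Printf("%s %s/%s: container %s running since %v\n",
					ev.Type, pod.Namespace, pod.Name, cs.Name, cs.State.Running.StartedAt)
			}
		}
	}
}
```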
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.379346 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq" Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.379544 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq" Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.395566 4721 patch_prober.go:28] interesting pod/router-default-5444994796-7vhgv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 13:03:36 crc kubenswrapper[4721]: [-]has-synced failed: reason withheld Feb 02 13:03:36 crc kubenswrapper[4721]: [+]process-running ok Feb 02 13:03:36 crc kubenswrapper[4721]: healthz check failed Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.396017 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7vhgv" podUID="b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.438819 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:36 crc kubenswrapper[4721]: E0202 13:03:36.438982 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:36.938958202 +0000 UTC m=+157.241472591 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.439080 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:36 crc kubenswrapper[4721]: E0202 13:03:36.439425 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:36.939417174 +0000 UTC m=+157.241931563 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.540397 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:36 crc kubenswrapper[4721]: E0202 13:03:36.540609 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:37.040560319 +0000 UTC m=+157.343074708 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.540899 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:36 crc kubenswrapper[4721]: E0202 13:03:36.541274 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:37.04125812 +0000 UTC m=+157.343772509 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.642148 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:36 crc kubenswrapper[4721]: E0202 13:03:36.642366 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:37.142330672 +0000 UTC m=+157.444845051 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.642478 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:36 crc kubenswrapper[4721]: E0202 13:03:36.642831 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:37.142816156 +0000 UTC m=+157.445330545 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.731490 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-f4l8v" Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.744012 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:36 crc kubenswrapper[4721]: E0202 13:03:36.744219 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:37.244188438 +0000 UTC m=+157.546702827 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.744336 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:36 crc kubenswrapper[4721]: E0202 13:03:36.744633 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:37.24462123 +0000 UTC m=+157.547135619 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.781977 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-z25fz" podStartSLOduration=131.781959738 podStartE2EDuration="2m11.781959738s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:36.102346226 +0000 UTC m=+156.404860615" watchObservedRunningTime="2026-02-02 13:03:36.781959738 +0000 UTC m=+157.084474127" Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.845706 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:36 crc kubenswrapper[4721]: E0202 13:03:36.845870 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:37.345843067 +0000 UTC m=+157.648357456 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.846054 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:36 crc kubenswrapper[4721]: E0202 13:03:36.846366 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:37.346352822 +0000 UTC m=+157.648867211 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.940173 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w9gmp" event={"ID":"a0a1d56e-00d0-4e88-bdfb-461578e327e6","Type":"ContainerStarted","Data":"4aaf11806c83c9a8c17f1f6647fcc6636c08cfc5d59a93259cf0b1eb1e64c3ee"} Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.944768 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-c958f" event={"ID":"911d6570-6a65-42b8-a562-3e1ccdc8d562","Type":"ContainerStarted","Data":"6c70bed4abe3f6f707760ed133feebd0cc860c4122dfc398c48b54a4187ceeb1"} Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.946213 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7gtg4" event={"ID":"6dc5b220-3e84-4a0a-9f7a-f27a007436f6","Type":"ContainerStarted","Data":"9da17c9d0725f02292c51b99e9e26c02584a85efbda4a2be69a38b3eae85296c"} Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.946538 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:36 crc kubenswrapper[4721]: E0202 13:03:36.946665 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:37.446643832 +0000 UTC m=+157.749158221 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.946774 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:36 crc kubenswrapper[4721]: E0202 13:03:36.947110 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:37.447094826 +0000 UTC m=+157.749609215 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.954628 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kh9ph" event={"ID":"625494cc-c7b0-4a0a-811c-d4822b1c0acc","Type":"ContainerStarted","Data":"a7fc4d052f1e7098b1ae4f2ee53b969db52a7df9be37565a5fe65ba3df3a7323"} Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.954675 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kh9ph" event={"ID":"625494cc-c7b0-4a0a-811c-d4822b1c0acc","Type":"ContainerStarted","Data":"4aa2069f7abf95d557c6f6e51977472c9e73803c17d0fc953e81a07accab08ed"} Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.954788 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-kh9ph" Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.962876 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f5gbr" event={"ID":"2de0eb97-f51f-4468-9a68-eb9d6a7ce40d","Type":"ContainerStarted","Data":"f4cab96224bddd5a8a888b08f3b2b58f17ef1fcad2c8e71a3c9833a79c5f7742"} Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.964644 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-c68d5" event={"ID":"6104d27e-fefa-4e2a-9b9e-62013c96f664","Type":"ContainerStarted","Data":"db700c1a357f0e0b7fddce98c406153a2f44ea1407caeae97c74bd1efdb90a89"} Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.964697 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-c68d5" event={"ID":"6104d27e-fefa-4e2a-9b9e-62013c96f664","Type":"ContainerStarted","Data":"b82a346e555b506db84a00ba966ded4574a5f93dde381f4739157aa3264abc19"} Feb 02 13:03:36 crc 
kubenswrapper[4721]: I0202 13:03:36.966853 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fgg7h" event={"ID":"f2a74f12-1bed-4744-9dec-57282d5301eb","Type":"ContainerStarted","Data":"b29303ec65ab709b7092cfc60d80e7ffbc12ce0f422d5144218d19a98595025f"} Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.968579 4721 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zcf44 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.968639 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zcf44" podUID="2c9074bc-889d-4ce7-a250-6fc5984703e0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.971408 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bz9nm" Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.993845 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w9gmp" podStartSLOduration=131.993826219 podStartE2EDuration="2m11.993826219s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:36.972242688 +0000 UTC m=+157.274757077" watchObservedRunningTime="2026-02-02 13:03:36.993826219 +0000 UTC m=+157.296340608" Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.994269 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-kh9ph" podStartSLOduration=8.994264531 podStartE2EDuration="8.994264531s" podCreationTimestamp="2026-02-02 13:03:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:36.991685778 +0000 UTC m=+157.294200167" watchObservedRunningTime="2026-02-02 13:03:36.994264531 +0000 UTC m=+157.296778910" Feb 02 13:03:36 crc kubenswrapper[4721]: I0202 13:03:36.997326 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq" Feb 02 13:03:37 crc kubenswrapper[4721]: I0202 13:03:37.048054 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:37 crc kubenswrapper[4721]: E0202 13:03:37.049698 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:37.549681801 +0000 UTC m=+157.852196190 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:37 crc kubenswrapper[4721]: I0202 13:03:37.103557 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-c68d5" podStartSLOduration=132.103537007 podStartE2EDuration="2m12.103537007s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:37.091936928 +0000 UTC m=+157.394451317" watchObservedRunningTime="2026-02-02 13:03:37.103537007 +0000 UTC m=+157.406051396" Feb 02 13:03:37 crc kubenswrapper[4721]: I0202 13:03:37.150851 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:37 crc kubenswrapper[4721]: E0202 13:03:37.151329 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:37.65131446 +0000 UTC m=+157.953828849 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:37 crc kubenswrapper[4721]: I0202 13:03:37.247713 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fgg7h" podStartSLOduration=132.24769605 podStartE2EDuration="2m12.24769605s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:37.197503108 +0000 UTC m=+157.500017497" watchObservedRunningTime="2026-02-02 13:03:37.24769605 +0000 UTC m=+157.550210439" Feb 02 13:03:37 crc kubenswrapper[4721]: I0202 13:03:37.253497 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:37 crc kubenswrapper[4721]: E0202 13:03:37.253751 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:37.753738252 +0000 UTC m=+158.056252641 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:37 crc kubenswrapper[4721]: I0202 13:03:37.354428 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:37 crc kubenswrapper[4721]: E0202 13:03:37.355130 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:37.855107673 +0000 UTC m=+158.157622062 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:37 crc kubenswrapper[4721]: I0202 13:03:37.398869 4721 patch_prober.go:28] interesting pod/router-default-5444994796-7vhgv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 13:03:37 crc kubenswrapper[4721]: [-]has-synced failed: reason withheld Feb 02 13:03:37 crc kubenswrapper[4721]: [+]process-running ok Feb 02 13:03:37 crc kubenswrapper[4721]: healthz check failed Feb 02 13:03:37 crc kubenswrapper[4721]: I0202 13:03:37.399181 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7vhgv" podUID="b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 13:03:37 crc kubenswrapper[4721]: I0202 13:03:37.445911 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fgg7h" Feb 02 13:03:37 crc kubenswrapper[4721]: I0202 13:03:37.455333 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:37 crc kubenswrapper[4721]: E0202 13:03:37.455616 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:37.95559961 +0000 UTC m=+158.258113999 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:37 crc kubenswrapper[4721]: I0202 13:03:37.504211 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krxdl" Feb 02 13:03:37 crc kubenswrapper[4721]: I0202 13:03:37.556874 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:37 crc kubenswrapper[4721]: E0202 13:03:37.557226 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:38.057214248 +0000 UTC m=+158.359728637 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:37 crc kubenswrapper[4721]: I0202 13:03:37.658876 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:37 crc kubenswrapper[4721]: E0202 13:03:37.659058 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:38.159028542 +0000 UTC m=+158.461542931 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:37 crc kubenswrapper[4721]: I0202 13:03:37.659188 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:37 crc kubenswrapper[4721]: E0202 13:03:37.659744 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:38.159728142 +0000 UTC m=+158.462242531 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:37 crc kubenswrapper[4721]: I0202 13:03:37.756965 4721 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 02 13:03:37 crc kubenswrapper[4721]: I0202 13:03:37.760943 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:37 crc kubenswrapper[4721]: E0202 13:03:37.761194 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:38.261152855 +0000 UTC m=+158.563667244 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:37 crc kubenswrapper[4721]: I0202 13:03:37.761373 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:37 crc kubenswrapper[4721]: E0202 13:03:37.761811 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:38.261800474 +0000 UTC m=+158.564314853 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:37 crc kubenswrapper[4721]: I0202 13:03:37.862238 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:37 crc kubenswrapper[4721]: E0202 13:03:37.862404 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:38.362373072 +0000 UTC m=+158.664887471 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:37 crc kubenswrapper[4721]: I0202 13:03:37.862555 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:37 crc kubenswrapper[4721]: E0202 13:03:37.862842 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:38.362828975 +0000 UTC m=+158.665343364 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:37 crc kubenswrapper[4721]: I0202 13:03:37.964089 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:37 crc kubenswrapper[4721]: E0202 13:03:37.964337 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:38.464300899 +0000 UTC m=+158.766815288 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:37 crc kubenswrapper[4721]: I0202 13:03:37.964522 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:37 crc kubenswrapper[4721]: E0202 13:03:37.964902 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:38.464892247 +0000 UTC m=+158.767406636 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:37 crc kubenswrapper[4721]: I0202 13:03:37.974488 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7gtg4" event={"ID":"6dc5b220-3e84-4a0a-9f7a-f27a007436f6","Type":"ContainerStarted","Data":"3fa8c76009aefe1c90c623918380b724071582b4b9d3b5b3d8840a87e79ce2a8"} Feb 02 13:03:37 crc kubenswrapper[4721]: I0202 13:03:37.974557 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7gtg4" event={"ID":"6dc5b220-3e84-4a0a-9f7a-f27a007436f6","Type":"ContainerStarted","Data":"3fd192519c55df1c53965318280073b4ab1c355b4f16fb01f21868f30bc94dd3"} Feb 02 13:03:37 crc kubenswrapper[4721]: I0202 13:03:37.974567 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7gtg4" event={"ID":"6dc5b220-3e84-4a0a-9f7a-f27a007436f6","Type":"ContainerStarted","Data":"e26e926277d71aad91d7fb0a0961d04e8e299605fb44b3a37f74cbbca6b1a737"} Feb 02 13:03:37 crc kubenswrapper[4721]: I0202 13:03:37.982025 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p9lvq" Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.007971 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-7gtg4" podStartSLOduration=10.007951926 podStartE2EDuration="10.007951926s" podCreationTimestamp="2026-02-02 13:03:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:38.000212277 +0000 UTC m=+158.302726676" watchObservedRunningTime="2026-02-02 13:03:38.007951926 +0000 UTC m=+158.310466315" Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.065411 4721 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:38 crc kubenswrapper[4721]: E0202 13:03:38.066197 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:38.566179955 +0000 UTC m=+158.868694344 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.158166 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ftf6s"] Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.159123 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftf6s" Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.163173 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.166930 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:38 crc kubenswrapper[4721]: E0202 13:03:38.167371 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:38.667357261 +0000 UTC m=+158.969871650 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.169336 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ftf6s"] Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.268049 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:38 crc kubenswrapper[4721]: E0202 13:03:38.268318 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:38.76828679 +0000 UTC m=+159.070801189 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.268834 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srrj7\" (UniqueName: \"kubernetes.io/projected/7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b-kube-api-access-srrj7\") pod \"community-operators-ftf6s\" (UID: \"7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b\") " pod="openshift-marketplace/community-operators-ftf6s" Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.269014 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b-catalog-content\") pod \"community-operators-ftf6s\" (UID: \"7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b\") " pod="openshift-marketplace/community-operators-ftf6s" Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.269194 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b-utilities\") pod \"community-operators-ftf6s\" (UID: \"7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b\") " pod="openshift-marketplace/community-operators-ftf6s" Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.269336 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:38 crc kubenswrapper[4721]: E0202 13:03:38.269725 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:38.76971029 +0000 UTC m=+159.072224679 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.355240 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s2tcj"] Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.356768 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s2tcj" Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.358780 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.370519 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.370731 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s2tcj"] Feb 02 13:03:38 crc kubenswrapper[4721]: E0202 13:03:38.370846 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:38.870754562 +0000 UTC m=+159.173268941 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.370967 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srrj7\" (UniqueName: \"kubernetes.io/projected/7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b-kube-api-access-srrj7\") pod \"community-operators-ftf6s\" (UID: \"7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b\") " pod="openshift-marketplace/community-operators-ftf6s" Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.371138 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b-catalog-content\") pod \"community-operators-ftf6s\" (UID: \"7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b\") " pod="openshift-marketplace/community-operators-ftf6s" Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.371275 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b-utilities\") pod \"community-operators-ftf6s\" (UID: \"7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b\") " pod="openshift-marketplace/community-operators-ftf6s" Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.371473 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:38 crc kubenswrapper[4721]: E0202 13:03:38.371792 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:38.871775632 +0000 UTC m=+159.174290021 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.371810 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b-catalog-content\") pod \"community-operators-ftf6s\" (UID: \"7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b\") " pod="openshift-marketplace/community-operators-ftf6s" Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.371887 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b-utilities\") pod \"community-operators-ftf6s\" (UID: \"7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b\") " pod="openshift-marketplace/community-operators-ftf6s" Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.393013 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srrj7\" (UniqueName: \"kubernetes.io/projected/7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b-kube-api-access-srrj7\") pod \"community-operators-ftf6s\" (UID: \"7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b\") " pod="openshift-marketplace/community-operators-ftf6s" Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.396432 4721 patch_prober.go:28] interesting pod/router-default-5444994796-7vhgv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 13:03:38 crc kubenswrapper[4721]: [-]has-synced failed: reason withheld Feb 02 13:03:38 crc kubenswrapper[4721]: [+]process-running ok Feb 02 13:03:38 crc kubenswrapper[4721]: healthz check failed Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.396504 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7vhgv" podUID="b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.472293 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:38 crc kubenswrapper[4721]: E0202 13:03:38.472398 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:38.972373201 +0000 UTC m=+159.274887590 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.472541 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c32d34a1-8dd8-435d-9491-748392c25b97-utilities\") pod \"certified-operators-s2tcj\" (UID: \"c32d34a1-8dd8-435d-9491-748392c25b97\") " pod="openshift-marketplace/certified-operators-s2tcj" Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.472590 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgmqw\" (UniqueName: \"kubernetes.io/projected/c32d34a1-8dd8-435d-9491-748392c25b97-kube-api-access-bgmqw\") pod \"certified-operators-s2tcj\" (UID: \"c32d34a1-8dd8-435d-9491-748392c25b97\") " pod="openshift-marketplace/certified-operators-s2tcj" Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.472638 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c32d34a1-8dd8-435d-9491-748392c25b97-catalog-content\") pod \"certified-operators-s2tcj\" (UID: \"c32d34a1-8dd8-435d-9491-748392c25b97\") " pod="openshift-marketplace/certified-operators-s2tcj" Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.472691 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:03:38 crc kubenswrapper[4721]: E0202 13:03:38.473004 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 13:03:38.972993298 +0000 UTC m=+159.275507687 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlhhk" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.474007 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftf6s" Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.555199 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-95btx"] Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.556462 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-95btx" Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.568865 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-95btx"] Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.573449 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.573740 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c32d34a1-8dd8-435d-9491-748392c25b97-utilities\") pod \"certified-operators-s2tcj\" (UID: \"c32d34a1-8dd8-435d-9491-748392c25b97\") " pod="openshift-marketplace/certified-operators-s2tcj" Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.573793 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgmqw\" (UniqueName: \"kubernetes.io/projected/c32d34a1-8dd8-435d-9491-748392c25b97-kube-api-access-bgmqw\") pod \"certified-operators-s2tcj\" (UID: \"c32d34a1-8dd8-435d-9491-748392c25b97\") " pod="openshift-marketplace/certified-operators-s2tcj" Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.573838 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c32d34a1-8dd8-435d-9491-748392c25b97-catalog-content\") pod \"certified-operators-s2tcj\" (UID: \"c32d34a1-8dd8-435d-9491-748392c25b97\") " pod="openshift-marketplace/certified-operators-s2tcj" Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.574345 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c32d34a1-8dd8-435d-9491-748392c25b97-catalog-content\") pod \"certified-operators-s2tcj\" (UID: \"c32d34a1-8dd8-435d-9491-748392c25b97\") " pod="openshift-marketplace/certified-operators-s2tcj" Feb 02 13:03:38 crc kubenswrapper[4721]: E0202 13:03:38.574426 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 13:03:39.074408852 +0000 UTC m=+159.376923241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.574673 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c32d34a1-8dd8-435d-9491-748392c25b97-utilities\") pod \"certified-operators-s2tcj\" (UID: \"c32d34a1-8dd8-435d-9491-748392c25b97\") " pod="openshift-marketplace/certified-operators-s2tcj"
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.607772 4721 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-02T13:03:37.757232754Z","Handler":null,"Name":""}
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.613693 4721 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.613747 4721 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.627731 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgmqw\" (UniqueName: \"kubernetes.io/projected/c32d34a1-8dd8-435d-9491-748392c25b97-kube-api-access-bgmqw\") pod \"certified-operators-s2tcj\" (UID: \"c32d34a1-8dd8-435d-9491-748392c25b97\") " pod="openshift-marketplace/certified-operators-s2tcj"
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.672762 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s2tcj"
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.677800 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8cfp\" (UniqueName: \"kubernetes.io/projected/3275da10-006d-43c9-bdd6-46282b8ac9d1-kube-api-access-q8cfp\") pod \"community-operators-95btx\" (UID: \"3275da10-006d-43c9-bdd6-46282b8ac9d1\") " pod="openshift-marketplace/community-operators-95btx"
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.677866 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3275da10-006d-43c9-bdd6-46282b8ac9d1-catalog-content\") pod \"community-operators-95btx\" (UID: \"3275da10-006d-43c9-bdd6-46282b8ac9d1\") " pod="openshift-marketplace/community-operators-95btx"
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.677882 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3275da10-006d-43c9-bdd6-46282b8ac9d1-utilities\") pod \"community-operators-95btx\" (UID: \"3275da10-006d-43c9-bdd6-46282b8ac9d1\") " pod="openshift-marketplace/community-operators-95btx"
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.677924 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk"
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.683542 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.683578 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk"
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.748913 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ftf6s"]
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.752155 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlhhk\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk"
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.774046 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hpqtk"]
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.777669 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hpqtk"
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.778726 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.778987 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hpqtk"]
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.779009 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8cfp\" (UniqueName: \"kubernetes.io/projected/3275da10-006d-43c9-bdd6-46282b8ac9d1-kube-api-access-q8cfp\") pod \"community-operators-95btx\" (UID: \"3275da10-006d-43c9-bdd6-46282b8ac9d1\") " pod="openshift-marketplace/community-operators-95btx"
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.779206 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3275da10-006d-43c9-bdd6-46282b8ac9d1-catalog-content\") pod \"community-operators-95btx\" (UID: \"3275da10-006d-43c9-bdd6-46282b8ac9d1\") " pod="openshift-marketplace/community-operators-95btx"
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.779242 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3275da10-006d-43c9-bdd6-46282b8ac9d1-utilities\") pod \"community-operators-95btx\" (UID: \"3275da10-006d-43c9-bdd6-46282b8ac9d1\") " pod="openshift-marketplace/community-operators-95btx"
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.779845 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3275da10-006d-43c9-bdd6-46282b8ac9d1-utilities\") pod \"community-operators-95btx\" (UID: \"3275da10-006d-43c9-bdd6-46282b8ac9d1\") " pod="openshift-marketplace/community-operators-95btx"
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.780984 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3275da10-006d-43c9-bdd6-46282b8ac9d1-catalog-content\") pod \"community-operators-95btx\" (UID: \"3275da10-006d-43c9-bdd6-46282b8ac9d1\") " pod="openshift-marketplace/community-operators-95btx"
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.805823 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8cfp\" (UniqueName: \"kubernetes.io/projected/3275da10-006d-43c9-bdd6-46282b8ac9d1-kube-api-access-q8cfp\") pod \"community-operators-95btx\" (UID: \"3275da10-006d-43c9-bdd6-46282b8ac9d1\") " pod="openshift-marketplace/community-operators-95btx"
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.811860 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.818421 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk"
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.880046 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b5702e4-96dd-479b-871a-d69bfdba91e1-utilities\") pod \"certified-operators-hpqtk\" (UID: \"5b5702e4-96dd-479b-871a-d69bfdba91e1\") " pod="openshift-marketplace/certified-operators-hpqtk"
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.880292 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b5702e4-96dd-479b-871a-d69bfdba91e1-catalog-content\") pod \"certified-operators-hpqtk\" (UID: \"5b5702e4-96dd-479b-871a-d69bfdba91e1\") " pod="openshift-marketplace/certified-operators-hpqtk"
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.880333 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdlcg\" (UniqueName: \"kubernetes.io/projected/5b5702e4-96dd-479b-871a-d69bfdba91e1-kube-api-access-jdlcg\") pod \"certified-operators-hpqtk\" (UID: \"5b5702e4-96dd-479b-871a-d69bfdba91e1\") " pod="openshift-marketplace/certified-operators-hpqtk"
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.945251 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s2tcj"]
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.950997 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-95btx"
Feb 02 13:03:38 crc kubenswrapper[4721]: W0202 13:03:38.959002 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc32d34a1_8dd8_435d_9491_748392c25b97.slice/crio-607b84235315a7323834248f190f48d1f32164d2db07ef1dcc79f0ce6457a6d0 WatchSource:0}: Error finding container 607b84235315a7323834248f190f48d1f32164d2db07ef1dcc79f0ce6457a6d0: Status 404 returned error can't find the container with id 607b84235315a7323834248f190f48d1f32164d2db07ef1dcc79f0ce6457a6d0
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.981914 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b5702e4-96dd-479b-871a-d69bfdba91e1-utilities\") pod \"certified-operators-hpqtk\" (UID: \"5b5702e4-96dd-479b-871a-d69bfdba91e1\") " pod="openshift-marketplace/certified-operators-hpqtk"
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.981955 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b5702e4-96dd-479b-871a-d69bfdba91e1-catalog-content\") pod \"certified-operators-hpqtk\" (UID: \"5b5702e4-96dd-479b-871a-d69bfdba91e1\") " pod="openshift-marketplace/certified-operators-hpqtk"
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.981978 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdlcg\" (UniqueName: \"kubernetes.io/projected/5b5702e4-96dd-479b-871a-d69bfdba91e1-kube-api-access-jdlcg\") pod \"certified-operators-hpqtk\" (UID: \"5b5702e4-96dd-479b-871a-d69bfdba91e1\") " pod="openshift-marketplace/certified-operators-hpqtk"
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.982405 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b5702e4-96dd-479b-871a-d69bfdba91e1-catalog-content\") pod \"certified-operators-hpqtk\" (UID: \"5b5702e4-96dd-479b-871a-d69bfdba91e1\") " pod="openshift-marketplace/certified-operators-hpqtk"
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.982871 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b5702e4-96dd-479b-871a-d69bfdba91e1-utilities\") pod \"certified-operators-hpqtk\" (UID: \"5b5702e4-96dd-479b-871a-d69bfdba91e1\") " pod="openshift-marketplace/certified-operators-hpqtk"
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.989476 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftf6s" event={"ID":"7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b","Type":"ContainerStarted","Data":"c4262d6d2388653a81a9cb645a74803128568514778cf935a45ab36a4268cbc6"}
Feb 02 13:03:38 crc kubenswrapper[4721]: I0202 13:03:38.989516 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftf6s" event={"ID":"7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b","Type":"ContainerStarted","Data":"ff138d00e2dec0f6fe53dd62f78ed24adffd461fe550704795a81bdea55a7066"}
Feb 02 13:03:39 crc kubenswrapper[4721]: I0202 13:03:39.011615 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdlcg\" (UniqueName: \"kubernetes.io/projected/5b5702e4-96dd-479b-871a-d69bfdba91e1-kube-api-access-jdlcg\") pod \"certified-operators-hpqtk\" (UID: \"5b5702e4-96dd-479b-871a-d69bfdba91e1\") " pod="openshift-marketplace/certified-operators-hpqtk"
Feb 02 13:03:39 crc kubenswrapper[4721]: I0202 13:03:39.016183 4721 generic.go:334] "Generic (PLEG): container finished" podID="873a8c0c-9da4-4619-9ebf-7a327eb22b7e" containerID="e5df09a9819f06ce199926abbd018fc8f1a3ae0cbae66765ccb3d7e1c3a1a81c" exitCode=0
Feb 02 13:03:39 crc kubenswrapper[4721]: I0202 13:03:39.016272 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-rxhcg" event={"ID":"873a8c0c-9da4-4619-9ebf-7a327eb22b7e","Type":"ContainerDied","Data":"e5df09a9819f06ce199926abbd018fc8f1a3ae0cbae66765ccb3d7e1c3a1a81c"}
Feb 02 13:03:39 crc kubenswrapper[4721]: I0202 13:03:39.018827 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2tcj" event={"ID":"c32d34a1-8dd8-435d-9491-748392c25b97","Type":"ContainerStarted","Data":"607b84235315a7323834248f190f48d1f32164d2db07ef1dcc79f0ce6457a6d0"}
Feb 02 13:03:39 crc kubenswrapper[4721]: I0202 13:03:39.030744 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fgg7h"
Feb 02 13:03:39 crc kubenswrapper[4721]: I0202 13:03:39.098692 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wlhhk"]
Feb 02 13:03:39 crc kubenswrapper[4721]: I0202 13:03:39.112309 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hpqtk"
Feb 02 13:03:39 crc kubenswrapper[4721]: I0202 13:03:39.188348 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-95btx"]
Feb 02 13:03:39 crc kubenswrapper[4721]: W0202 13:03:39.224344 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3275da10_006d_43c9_bdd6_46282b8ac9d1.slice/crio-d1d7313e7e9488250223e5d45503d08f43af174101c199e67ce71331531c040b WatchSource:0}: Error finding container d1d7313e7e9488250223e5d45503d08f43af174101c199e67ce71331531c040b: Status 404 returned error can't find the container with id d1d7313e7e9488250223e5d45503d08f43af174101c199e67ce71331531c040b
Feb 02 13:03:39 crc kubenswrapper[4721]: I0202 13:03:39.310785 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hpqtk"]
Feb 02 13:03:39 crc kubenswrapper[4721]: W0202 13:03:39.317428 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b5702e4_96dd_479b_871a_d69bfdba91e1.slice/crio-cee4e973cd74c01297b1880257bab371fd0bb8e104dad9318438c355e90ddc2e WatchSource:0}: Error finding container cee4e973cd74c01297b1880257bab371fd0bb8e104dad9318438c355e90ddc2e: Status 404 returned error can't find the container with id cee4e973cd74c01297b1880257bab371fd0bb8e104dad9318438c355e90ddc2e
Feb 02 13:03:39 crc kubenswrapper[4721]: I0202 13:03:39.395201 4721 patch_prober.go:28] interesting pod/router-default-5444994796-7vhgv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 13:03:39 crc kubenswrapper[4721]: [-]has-synced failed: reason withheld
Feb 02 13:03:39 crc kubenswrapper[4721]: [+]process-running ok
Feb 02 13:03:39 crc kubenswrapper[4721]: healthz check failed
Feb 02 13:03:39 crc kubenswrapper[4721]: I0202 13:03:39.395258 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7vhgv" podUID="b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.034655 4721 generic.go:334] "Generic (PLEG): container finished" podID="7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b" containerID="c4262d6d2388653a81a9cb645a74803128568514778cf935a45ab36a4268cbc6" exitCode=0
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.034728 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftf6s" event={"ID":"7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b","Type":"ContainerDied","Data":"c4262d6d2388653a81a9cb645a74803128568514778cf935a45ab36a4268cbc6"}
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.037004 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" event={"ID":"e5f7f80a-15ef-47b9-9e1e-325066df7897","Type":"ContainerStarted","Data":"0bd3bc5f864672ee0d853f714bcf1118de69c8c71858f5b65ce239a92ed34811"}
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.037026 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" event={"ID":"e5f7f80a-15ef-47b9-9e1e-325066df7897","Type":"ContainerStarted","Data":"05e1b0050534ad29187ccb842c7d704d41289dc2c02dfe3c8fae4b1bff20a647"}
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.037594 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.037886 4721 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.044103 4721 generic.go:334] "Generic (PLEG): container finished" podID="c32d34a1-8dd8-435d-9491-748392c25b97" containerID="70aefbce2f033e0a1ab3507eb5f25cc0466994c0e699fb1e5db5f4596c72d39d" exitCode=0
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.044181 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2tcj" event={"ID":"c32d34a1-8dd8-435d-9491-748392c25b97","Type":"ContainerDied","Data":"70aefbce2f033e0a1ab3507eb5f25cc0466994c0e699fb1e5db5f4596c72d39d"}
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.047964 4721 generic.go:334] "Generic (PLEG): container finished" podID="3275da10-006d-43c9-bdd6-46282b8ac9d1" containerID="e6fbf9234354b68f2d074039ff57046d5bb0dc1205805a01d7f0f4b5a8698da0" exitCode=0
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.048034 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95btx" event={"ID":"3275da10-006d-43c9-bdd6-46282b8ac9d1","Type":"ContainerDied","Data":"e6fbf9234354b68f2d074039ff57046d5bb0dc1205805a01d7f0f4b5a8698da0"}
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.048058 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95btx" event={"ID":"3275da10-006d-43c9-bdd6-46282b8ac9d1","Type":"ContainerStarted","Data":"d1d7313e7e9488250223e5d45503d08f43af174101c199e67ce71331531c040b"}
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.051424 4721 generic.go:334] "Generic (PLEG): container finished" podID="5b5702e4-96dd-479b-871a-d69bfdba91e1" containerID="7fd0cce05f07f725f6f879eba319ea2fa9bcc093917d1f8c17be638fe958ae8a" exitCode=0
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.051667 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpqtk" event={"ID":"5b5702e4-96dd-479b-871a-d69bfdba91e1","Type":"ContainerDied","Data":"7fd0cce05f07f725f6f879eba319ea2fa9bcc093917d1f8c17be638fe958ae8a"}
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.051737 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpqtk" event={"ID":"5b5702e4-96dd-479b-871a-d69bfdba91e1","Type":"ContainerStarted","Data":"cee4e973cd74c01297b1880257bab371fd0bb8e104dad9318438c355e90ddc2e"}
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.142787 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" podStartSLOduration=135.142746737 podStartE2EDuration="2m15.142746737s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:40.137147458 +0000 UTC m=+160.439661847" watchObservedRunningTime="2026-02-02 13:03:40.142746737 +0000 UTC m=+160.445261126"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.175383 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-kcw66"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.175526 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-kcw66"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.182006 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-kcw66"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.320650 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-rxhcg"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.359700 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-75gx6"]
Feb 02 13:03:40 crc kubenswrapper[4721]: E0202 13:03:40.359964 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="873a8c0c-9da4-4619-9ebf-7a327eb22b7e" containerName="collect-profiles"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.359979 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="873a8c0c-9da4-4619-9ebf-7a327eb22b7e" containerName="collect-profiles"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.360416 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="873a8c0c-9da4-4619-9ebf-7a327eb22b7e" containerName="collect-profiles"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.361474 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-75gx6"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.369312 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.376451 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-75gx6"]
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.394827 4721 patch_prober.go:28] interesting pod/router-default-5444994796-7vhgv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 13:03:40 crc kubenswrapper[4721]: [-]has-synced failed: reason withheld
Feb 02 13:03:40 crc kubenswrapper[4721]: [+]process-running ok
Feb 02 13:03:40 crc kubenswrapper[4721]: healthz check failed
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.394933 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7vhgv" podUID="b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.409519 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/873a8c0c-9da4-4619-9ebf-7a327eb22b7e-config-volume\") pod \"873a8c0c-9da4-4619-9ebf-7a327eb22b7e\" (UID: \"873a8c0c-9da4-4619-9ebf-7a327eb22b7e\") "
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.409628 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnzzr\" (UniqueName: \"kubernetes.io/projected/873a8c0c-9da4-4619-9ebf-7a327eb22b7e-kube-api-access-lnzzr\") pod \"873a8c0c-9da4-4619-9ebf-7a327eb22b7e\" (UID: \"873a8c0c-9da4-4619-9ebf-7a327eb22b7e\") "
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.409669 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/873a8c0c-9da4-4619-9ebf-7a327eb22b7e-secret-volume\") pod \"873a8c0c-9da4-4619-9ebf-7a327eb22b7e\" (UID: \"873a8c0c-9da4-4619-9ebf-7a327eb22b7e\") "
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.409833 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsbmd\" (UniqueName: \"kubernetes.io/projected/b11b9dcc-682e-48c6-9948-78aafcaf9e36-kube-api-access-gsbmd\") pod \"redhat-marketplace-75gx6\" (UID: \"b11b9dcc-682e-48c6-9948-78aafcaf9e36\") " pod="openshift-marketplace/redhat-marketplace-75gx6"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.409898 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11b9dcc-682e-48c6-9948-78aafcaf9e36-utilities\") pod \"redhat-marketplace-75gx6\" (UID: \"b11b9dcc-682e-48c6-9948-78aafcaf9e36\") " pod="openshift-marketplace/redhat-marketplace-75gx6"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.409917 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11b9dcc-682e-48c6-9948-78aafcaf9e36-catalog-content\") pod \"redhat-marketplace-75gx6\" (UID: \"b11b9dcc-682e-48c6-9948-78aafcaf9e36\") " pod="openshift-marketplace/redhat-marketplace-75gx6"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.410784 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/873a8c0c-9da4-4619-9ebf-7a327eb22b7e-config-volume" (OuterVolumeSpecName: "config-volume") pod "873a8c0c-9da4-4619-9ebf-7a327eb22b7e" (UID: "873a8c0c-9da4-4619-9ebf-7a327eb22b7e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.416463 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/873a8c0c-9da4-4619-9ebf-7a327eb22b7e-kube-api-access-lnzzr" (OuterVolumeSpecName: "kube-api-access-lnzzr") pod "873a8c0c-9da4-4619-9ebf-7a327eb22b7e" (UID: "873a8c0c-9da4-4619-9ebf-7a327eb22b7e"). InnerVolumeSpecName "kube-api-access-lnzzr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.423340 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/873a8c0c-9da4-4619-9ebf-7a327eb22b7e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "873a8c0c-9da4-4619-9ebf-7a327eb22b7e" (UID: "873a8c0c-9da4-4619-9ebf-7a327eb22b7e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.427471 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.511402 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsbmd\" (UniqueName: \"kubernetes.io/projected/b11b9dcc-682e-48c6-9948-78aafcaf9e36-kube-api-access-gsbmd\") pod \"redhat-marketplace-75gx6\" (UID: \"b11b9dcc-682e-48c6-9948-78aafcaf9e36\") " pod="openshift-marketplace/redhat-marketplace-75gx6"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.511495 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11b9dcc-682e-48c6-9948-78aafcaf9e36-utilities\") pod \"redhat-marketplace-75gx6\" (UID: \"b11b9dcc-682e-48c6-9948-78aafcaf9e36\") " pod="openshift-marketplace/redhat-marketplace-75gx6"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.511517 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11b9dcc-682e-48c6-9948-78aafcaf9e36-catalog-content\") pod \"redhat-marketplace-75gx6\" (UID: \"b11b9dcc-682e-48c6-9948-78aafcaf9e36\") " pod="openshift-marketplace/redhat-marketplace-75gx6"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.511584 4721 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/873a8c0c-9da4-4619-9ebf-7a327eb22b7e-config-volume\") on node \"crc\" DevicePath \"\""
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.511598 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnzzr\" (UniqueName: \"kubernetes.io/projected/873a8c0c-9da4-4619-9ebf-7a327eb22b7e-kube-api-access-lnzzr\") on node \"crc\" DevicePath \"\""
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.511607 4721 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/873a8c0c-9da4-4619-9ebf-7a327eb22b7e-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.512039 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11b9dcc-682e-48c6-9948-78aafcaf9e36-catalog-content\") pod \"redhat-marketplace-75gx6\" (UID: \"b11b9dcc-682e-48c6-9948-78aafcaf9e36\") " pod="openshift-marketplace/redhat-marketplace-75gx6"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.512188 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11b9dcc-682e-48c6-9948-78aafcaf9e36-utilities\") pod \"redhat-marketplace-75gx6\" (UID: \"b11b9dcc-682e-48c6-9948-78aafcaf9e36\") " pod="openshift-marketplace/redhat-marketplace-75gx6"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.534930 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsbmd\" (UniqueName: \"kubernetes.io/projected/b11b9dcc-682e-48c6-9948-78aafcaf9e36-kube-api-access-gsbmd\") pod \"redhat-marketplace-75gx6\" (UID: \"b11b9dcc-682e-48c6-9948-78aafcaf9e36\") " pod="openshift-marketplace/redhat-marketplace-75gx6"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.682248 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-75gx6"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.774292 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7jcv9"]
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.778427 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7jcv9"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.783223 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7jcv9"]
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.815136 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0f9579e-e58d-40b4-82c4-83111bfa9735-catalog-content\") pod \"redhat-marketplace-7jcv9\" (UID: \"d0f9579e-e58d-40b4-82c4-83111bfa9735\") " pod="openshift-marketplace/redhat-marketplace-7jcv9"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.815199 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0f9579e-e58d-40b4-82c4-83111bfa9735-utilities\") pod \"redhat-marketplace-7jcv9\" (UID: \"d0f9579e-e58d-40b4-82c4-83111bfa9735\") " pod="openshift-marketplace/redhat-marketplace-7jcv9"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.815231 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24wc6\" (UniqueName: \"kubernetes.io/projected/d0f9579e-e58d-40b4-82c4-83111bfa9735-kube-api-access-24wc6\") pod \"redhat-marketplace-7jcv9\" (UID: \"d0f9579e-e58d-40b4-82c4-83111bfa9735\") " pod="openshift-marketplace/redhat-marketplace-7jcv9"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.870030 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.870966 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.875835 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.876107 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.885776 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-75gx6"]
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.895146 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.917408 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0f9579e-e58d-40b4-82c4-83111bfa9735-utilities\") pod \"redhat-marketplace-7jcv9\" (UID: \"d0f9579e-e58d-40b4-82c4-83111bfa9735\") " pod="openshift-marketplace/redhat-marketplace-7jcv9"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.917451 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24wc6\" (UniqueName: \"kubernetes.io/projected/d0f9579e-e58d-40b4-82c4-83111bfa9735-kube-api-access-24wc6\") pod \"redhat-marketplace-7jcv9\" (UID: \"d0f9579e-e58d-40b4-82c4-83111bfa9735\") " pod="openshift-marketplace/redhat-marketplace-7jcv9"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.917525 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/598f8872-99ec-4855-9124-07a34b4ceaf9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"598f8872-99ec-4855-9124-07a34b4ceaf9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.917554 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0f9579e-e58d-40b4-82c4-83111bfa9735-catalog-content\") pod \"redhat-marketplace-7jcv9\" (UID: \"d0f9579e-e58d-40b4-82c4-83111bfa9735\") " pod="openshift-marketplace/redhat-marketplace-7jcv9"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.917584 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/598f8872-99ec-4855-9124-07a34b4ceaf9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"598f8872-99ec-4855-9124-07a34b4ceaf9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.917999 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0f9579e-e58d-40b4-82c4-83111bfa9735-catalog-content\") pod \"redhat-marketplace-7jcv9\" (UID: \"d0f9579e-e58d-40b4-82c4-83111bfa9735\") " pod="openshift-marketplace/redhat-marketplace-7jcv9"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.918011 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0f9579e-e58d-40b4-82c4-83111bfa9735-utilities\") pod \"redhat-marketplace-7jcv9\" (UID: \"d0f9579e-e58d-40b4-82c4-83111bfa9735\") " pod="openshift-marketplace/redhat-marketplace-7jcv9"
Feb 02 13:03:40 crc kubenswrapper[4721]: I0202 13:03:40.938277 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24wc6\" (UniqueName: \"kubernetes.io/projected/d0f9579e-e58d-40b4-82c4-83111bfa9735-kube-api-access-24wc6\") pod \"redhat-marketplace-7jcv9\" (UID: \"d0f9579e-e58d-40b4-82c4-83111bfa9735\") " pod="openshift-marketplace/redhat-marketplace-7jcv9"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.005844 4721 patch_prober.go:28] interesting pod/downloads-7954f5f757-zt9ng container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.005897 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zt9ng" podUID="af02a63f-5e62-47ff-baf5-1dc1e95dc1ad" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.005860 4721 patch_prober.go:28] interesting pod/downloads-7954f5f757-zt9ng container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.005952 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zt9ng" podUID="af02a63f-5e62-47ff-baf5-1dc1e95dc1ad" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.007375 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.008619 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.012196 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.012405 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.015103 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.018318 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/598f8872-99ec-4855-9124-07a34b4ceaf9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"598f8872-99ec-4855-9124-07a34b4ceaf9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.018385 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8d8781e-3259-4d55-b0d2-968979b5cd99-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b8d8781e-3259-4d55-b0d2-968979b5cd99\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.018427 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b8d8781e-3259-4d55-b0d2-968979b5cd99-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b8d8781e-3259-4d55-b0d2-968979b5cd99\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.018468 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/598f8872-99ec-4855-9124-07a34b4ceaf9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"598f8872-99ec-4855-9124-07a34b4ceaf9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.018490 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/598f8872-99ec-4855-9124-07a34b4ceaf9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"598f8872-99ec-4855-9124-07a34b4ceaf9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.040364 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/598f8872-99ec-4855-9124-07a34b4ceaf9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"598f8872-99ec-4855-9124-07a34b4ceaf9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.065224 4721 generic.go:334] "Generic (PLEG): container finished" podID="b11b9dcc-682e-48c6-9948-78aafcaf9e36" containerID="d98e8c3180feeb272dbc337ede325b3ee8bdf7c11b2445546d5a7351f1d071c3" exitCode=0
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.065314 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75gx6" event={"ID":"b11b9dcc-682e-48c6-9948-78aafcaf9e36","Type":"ContainerDied","Data":"d98e8c3180feeb272dbc337ede325b3ee8bdf7c11b2445546d5a7351f1d071c3"}
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.065342 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75gx6" event={"ID":"b11b9dcc-682e-48c6-9948-78aafcaf9e36","Type":"ContainerStarted","Data":"5628d04181a13e0213caa7a951b015bba8003374b2bb6f608199a4eba95c3b17"}
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.067129 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-rxhcg" event={"ID":"873a8c0c-9da4-4619-9ebf-7a327eb22b7e","Type":"ContainerDied","Data":"a3ba63e33fb814de0a45f3a1bb2d277752ed10e110e4a55af2fb1ec65495a8cc"}
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.067160 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3ba63e33fb814de0a45f3a1bb2d277752ed10e110e4a55af2fb1ec65495a8cc"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.067236 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500620-rxhcg"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.072449 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-kcw66"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.112622 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-2dsnx"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.113468 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-2dsnx"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.114334 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7jcv9"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.115274 4721 patch_prober.go:28] interesting pod/console-f9d7485db-2dsnx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.115313 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-2dsnx" podUID="ae3f417e-2bae-44dd-973f-5314b6f64972" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.121477 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8d8781e-3259-4d55-b0d2-968979b5cd99-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b8d8781e-3259-4d55-b0d2-968979b5cd99\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.121618 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b8d8781e-3259-4d55-b0d2-968979b5cd99-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b8d8781e-3259-4d55-b0d2-968979b5cd99\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.121703 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b8d8781e-3259-4d55-b0d2-968979b5cd99-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b8d8781e-3259-4d55-b0d2-968979b5cd99\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.179382 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8d8781e-3259-4d55-b0d2-968979b5cd99-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b8d8781e-3259-4d55-b0d2-968979b5cd99\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.206587 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.323490 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.378746 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pm5t7"]
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.395518 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pm5t7"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.396286 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-7vhgv"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.399912 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.449054 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b97707af-edd5-4907-9459-615b32a005e6-utilities\") pod \"redhat-operators-pm5t7\" (UID: \"b97707af-edd5-4907-9459-615b32a005e6\") " pod="openshift-marketplace/redhat-operators-pm5t7"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.449323 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b97707af-edd5-4907-9459-615b32a005e6-catalog-content\") pod \"redhat-operators-pm5t7\" (UID: \"b97707af-edd5-4907-9459-615b32a005e6\") " pod="openshift-marketplace/redhat-operators-pm5t7"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.449353 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk875\" (UniqueName: \"kubernetes.io/projected/b97707af-edd5-4907-9459-615b32a005e6-kube-api-access-mk875\") pod \"redhat-operators-pm5t7\" (UID: \"b97707af-edd5-4907-9459-615b32a005e6\") " pod="openshift-marketplace/redhat-operators-pm5t7"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.451488 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pm5t7"]
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.456357 4721 patch_prober.go:28] interesting pod/router-default-5444994796-7vhgv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 13:03:41 crc kubenswrapper[4721]: [-]has-synced failed: reason withheld
Feb 02 13:03:41 crc kubenswrapper[4721]: [+]process-running ok
Feb 02 13:03:41 crc kubenswrapper[4721]: healthz check failed
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.456428 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7vhgv" podUID="b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.555743 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b97707af-edd5-4907-9459-615b32a005e6-catalog-content\") pod \"redhat-operators-pm5t7\" (UID: \"b97707af-edd5-4907-9459-615b32a005e6\") " pod="openshift-marketplace/redhat-operators-pm5t7"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.555788 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk875\" (UniqueName: \"kubernetes.io/projected/b97707af-edd5-4907-9459-615b32a005e6-kube-api-access-mk875\") pod \"redhat-operators-pm5t7\" (UID: \"b97707af-edd5-4907-9459-615b32a005e6\") " pod="openshift-marketplace/redhat-operators-pm5t7"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.555858 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b97707af-edd5-4907-9459-615b32a005e6-utilities\") pod \"redhat-operators-pm5t7\" (UID: \"b97707af-edd5-4907-9459-615b32a005e6\") " pod="openshift-marketplace/redhat-operators-pm5t7"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.556385 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b97707af-edd5-4907-9459-615b32a005e6-utilities\") pod \"redhat-operators-pm5t7\" (UID: \"b97707af-edd5-4907-9459-615b32a005e6\") " pod="openshift-marketplace/redhat-operators-pm5t7"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.556605 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b97707af-edd5-4907-9459-615b32a005e6-catalog-content\") pod \"redhat-operators-pm5t7\" (UID: \"b97707af-edd5-4907-9459-615b32a005e6\") " pod="openshift-marketplace/redhat-operators-pm5t7"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.593349 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk875\" (UniqueName: \"kubernetes.io/projected/b97707af-edd5-4907-9459-615b32a005e6-kube-api-access-mk875\") pod \"redhat-operators-pm5t7\" (UID: \"b97707af-edd5-4907-9459-615b32a005e6\") " pod="openshift-marketplace/redhat-operators-pm5t7"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.641729 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7jcv9"]
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.742603 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.752880 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zcf44"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.789138 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xqxjr"]
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.790110 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pm5t7"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.811567 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xqxjr"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.834272 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xqxjr"]
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.840128 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 02 13:03:41 crc kubenswrapper[4721]: W0202 13:03:41.892284 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb8d8781e_3259_4d55_b0d2_968979b5cd99.slice/crio-1082372af7a0adcfdeb24cb7893016cd78e285acfcc8814c2bd902fe4631a01a WatchSource:0}: Error finding container 1082372af7a0adcfdeb24cb7893016cd78e285acfcc8814c2bd902fe4631a01a: Status 404 returned error can't find the container with id 1082372af7a0adcfdeb24cb7893016cd78e285acfcc8814c2bd902fe4631a01a
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.962067 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fca822b5-78c8-47d9-9cc5-266118a2b5aa-catalog-content\") pod \"redhat-operators-xqxjr\" (UID: \"fca822b5-78c8-47d9-9cc5-266118a2b5aa\") " pod="openshift-marketplace/redhat-operators-xqxjr"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.962193 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fca822b5-78c8-47d9-9cc5-266118a2b5aa-utilities\") pod \"redhat-operators-xqxjr\" (UID: \"fca822b5-78c8-47d9-9cc5-266118a2b5aa\") " pod="openshift-marketplace/redhat-operators-xqxjr"
Feb 02 13:03:41 crc kubenswrapper[4721]: I0202 13:03:41.962236 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8zdz\" (UniqueName: \"kubernetes.io/projected/fca822b5-78c8-47d9-9cc5-266118a2b5aa-kube-api-access-t8zdz\") pod \"redhat-operators-xqxjr\" (UID: \"fca822b5-78c8-47d9-9cc5-266118a2b5aa\") " pod="openshift-marketplace/redhat-operators-xqxjr"
Feb 02 13:03:42 crc kubenswrapper[4721]: I0202 13:03:42.065411 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fca822b5-78c8-47d9-9cc5-266118a2b5aa-catalog-content\") pod \"redhat-operators-xqxjr\" (UID: \"fca822b5-78c8-47d9-9cc5-266118a2b5aa\") " pod="openshift-marketplace/redhat-operators-xqxjr"
Feb 02 13:03:42 crc kubenswrapper[4721]: I0202 13:03:42.065822 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fca822b5-78c8-47d9-9cc5-266118a2b5aa-utilities\") pod \"redhat-operators-xqxjr\" (UID: \"fca822b5-78c8-47d9-9cc5-266118a2b5aa\") " pod="openshift-marketplace/redhat-operators-xqxjr"
Feb 02 13:03:42 crc kubenswrapper[4721]: I0202 13:03:42.065854 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8zdz\" (UniqueName: \"kubernetes.io/projected/fca822b5-78c8-47d9-9cc5-266118a2b5aa-kube-api-access-t8zdz\") pod \"redhat-operators-xqxjr\" (UID: \"fca822b5-78c8-47d9-9cc5-266118a2b5aa\") " pod="openshift-marketplace/redhat-operators-xqxjr"
Feb 02 13:03:42 crc kubenswrapper[4721]: I0202 13:03:42.066772 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fca822b5-78c8-47d9-9cc5-266118a2b5aa-catalog-content\") pod \"redhat-operators-xqxjr\" (UID: \"fca822b5-78c8-47d9-9cc5-266118a2b5aa\") " pod="openshift-marketplace/redhat-operators-xqxjr"
Feb 02 13:03:42 crc kubenswrapper[4721]: I0202 13:03:42.067182 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fca822b5-78c8-47d9-9cc5-266118a2b5aa-utilities\") pod \"redhat-operators-xqxjr\" (UID: \"fca822b5-78c8-47d9-9cc5-266118a2b5aa\") " pod="openshift-marketplace/redhat-operators-xqxjr"
Feb 02 13:03:42 crc kubenswrapper[4721]: I0202 13:03:42.099454 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jcv9" event={"ID":"d0f9579e-e58d-40b4-82c4-83111bfa9735","Type":"ContainerStarted","Data":"3ccd0f09abd6b2021524950faafaef3b13ab04a32e2570fdc710dd7ecb17b03e"}
Feb 02 13:03:42 crc kubenswrapper[4721]: I0202 13:03:42.099512 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jcv9" event={"ID":"d0f9579e-e58d-40b4-82c4-83111bfa9735","Type":"ContainerStarted","Data":"00041994ab43d697d41af55566c6a0ac8e00b0660330d7b33111225ed94d785c"}
Feb 02 13:03:42 crc kubenswrapper[4721]: I0202 13:03:42.100650 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8zdz\" (UniqueName: \"kubernetes.io/projected/fca822b5-78c8-47d9-9cc5-266118a2b5aa-kube-api-access-t8zdz\") pod \"redhat-operators-xqxjr\" (UID: \"fca822b5-78c8-47d9-9cc5-266118a2b5aa\") " pod="openshift-marketplace/redhat-operators-xqxjr"
Feb 02 13:03:42 crc kubenswrapper[4721]: I0202 13:03:42.115413 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b8d8781e-3259-4d55-b0d2-968979b5cd99","Type":"ContainerStarted","Data":"1082372af7a0adcfdeb24cb7893016cd78e285acfcc8814c2bd902fe4631a01a"}
Feb 02 13:03:42 crc kubenswrapper[4721]: I0202 13:03:42.117839 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"598f8872-99ec-4855-9124-07a34b4ceaf9","Type":"ContainerStarted","Data":"e681de4a8185193feb8aeae9c251e9c185dfaa6492e65f34c92fdd330062b5b0"}
Feb 02 13:03:42 crc kubenswrapper[4721]: I0202 13:03:42.211840 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xqxjr"
Feb 02 13:03:42 crc kubenswrapper[4721]: I0202 13:03:42.283363 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pm5t7"]
Feb 02 13:03:42 crc kubenswrapper[4721]: W0202 13:03:42.373951 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb97707af_edd5_4907_9459_615b32a005e6.slice/crio-731af821f39e85a65313814ab808bf2c6795132d15116e8c0a34a91225b2d5b6 WatchSource:0}: Error finding container 731af821f39e85a65313814ab808bf2c6795132d15116e8c0a34a91225b2d5b6: Status 404 returned error can't find the container with id 731af821f39e85a65313814ab808bf2c6795132d15116e8c0a34a91225b2d5b6
Feb 02 13:03:42 crc kubenswrapper[4721]: I0202 13:03:42.396966 4721 patch_prober.go:28] interesting pod/router-default-5444994796-7vhgv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 13:03:42 crc kubenswrapper[4721]: [-]has-synced failed: reason withheld
Feb 02 13:03:42 crc kubenswrapper[4721]: [+]process-running ok
Feb 02 13:03:42 crc kubenswrapper[4721]: healthz check failed
Feb 02 13:03:42 crc kubenswrapper[4721]: I0202 13:03:42.397044 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7vhgv" podUID="b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 13:03:42 crc kubenswrapper[4721]: I0202 13:03:42.761842 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xqxjr"]
Feb 02 13:03:43 crc kubenswrapper[4721]: I0202 13:03:43.127790 4721 generic.go:334] "Generic (PLEG): container finished" podID="b97707af-edd5-4907-9459-615b32a005e6" containerID="41599e7535f02f311fe8e5965707307ae8f5502aec8ceadc6ba6ac29d4504579" exitCode=0
Feb 02 13:03:43 crc kubenswrapper[4721]: I0202 13:03:43.128083 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pm5t7" event={"ID":"b97707af-edd5-4907-9459-615b32a005e6","Type":"ContainerDied","Data":"41599e7535f02f311fe8e5965707307ae8f5502aec8ceadc6ba6ac29d4504579"}
Feb 02 13:03:43 crc kubenswrapper[4721]: I0202 13:03:43.128141 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pm5t7" event={"ID":"b97707af-edd5-4907-9459-615b32a005e6","Type":"ContainerStarted","Data":"731af821f39e85a65313814ab808bf2c6795132d15116e8c0a34a91225b2d5b6"}
Feb 02 13:03:43 crc kubenswrapper[4721]: I0202 13:03:43.141820 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b8d8781e-3259-4d55-b0d2-968979b5cd99","Type":"ContainerStarted","Data":"ad974937e7efafb3ad0f3ccfafa676f54039c93d4ff006428e8fe6f5b648894d"}
Feb 02 13:03:43 crc kubenswrapper[4721]: I0202 13:03:43.147188 4721 generic.go:334] "Generic (PLEG): container finished" podID="fca822b5-78c8-47d9-9cc5-266118a2b5aa" containerID="1307da490b649bddfe1be4c1fdffa3d95bafee24aa3239fe6da07477129dd995" exitCode=0
Feb 02 13:03:43 crc kubenswrapper[4721]: I0202 13:03:43.147275 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqxjr" event={"ID":"fca822b5-78c8-47d9-9cc5-266118a2b5aa","Type":"ContainerDied","Data":"1307da490b649bddfe1be4c1fdffa3d95bafee24aa3239fe6da07477129dd995"}
Feb 02 13:03:43 crc kubenswrapper[4721]: I0202 13:03:43.147307 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqxjr" event={"ID":"fca822b5-78c8-47d9-9cc5-266118a2b5aa","Type":"ContainerStarted","Data":"47060e9c65a5aeb6ad4fedeb9a16d1cc215190986f8d21247996042fbd85459e"}
Feb 02 13:03:43 crc kubenswrapper[4721]: I0202 13:03:43.155185 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"598f8872-99ec-4855-9124-07a34b4ceaf9","Type":"ContainerStarted","Data":"dcf732d3a734d4bb94ed49bdb0e62cb781c98472db6dbf2ecb3f859e26f8167a"}
Feb 02 13:03:43 crc kubenswrapper[4721]: I0202 13:03:43.161868 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.161847426 podStartE2EDuration="3.161847426s" podCreationTimestamp="2026-02-02 13:03:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:43.157921725 +0000 UTC m=+163.460436134" watchObservedRunningTime="2026-02-02 13:03:43.161847426 +0000 UTC m=+163.464361835"
Feb 02 13:03:43 crc kubenswrapper[4721]: I0202 13:03:43.174514 4721 generic.go:334] "Generic (PLEG): container finished" podID="d0f9579e-e58d-40b4-82c4-83111bfa9735" containerID="3ccd0f09abd6b2021524950faafaef3b13ab04a32e2570fdc710dd7ecb17b03e" exitCode=0
Feb 02 13:03:43 crc kubenswrapper[4721]: I0202 13:03:43.174588 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jcv9" event={"ID":"d0f9579e-e58d-40b4-82c4-83111bfa9735","Type":"ContainerDied","Data":"3ccd0f09abd6b2021524950faafaef3b13ab04a32e2570fdc710dd7ecb17b03e"}
Feb 02 13:03:43 crc kubenswrapper[4721]: I0202 13:03:43.177331 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.177313534 podStartE2EDuration="3.177313534s" podCreationTimestamp="2026-02-02 13:03:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:03:43.175122633 +0000 UTC m=+163.477637032" watchObservedRunningTime="2026-02-02 13:03:43.177313534 +0000 UTC m=+163.479827933"
Feb 02 13:03:43 crc kubenswrapper[4721]: I0202 13:03:43.398112 4721 patch_prober.go:28] interesting pod/router-default-5444994796-7vhgv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 13:03:43 crc kubenswrapper[4721]: [-]has-synced failed: reason withheld
Feb 02 13:03:43 crc kubenswrapper[4721]: [+]process-running ok
Feb 02 13:03:43 crc kubenswrapper[4721]: healthz check failed
Feb 02 13:03:43 crc kubenswrapper[4721]: I0202 13:03:43.398611 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7vhgv" podUID="b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 13:03:44 crc kubenswrapper[4721]: I0202 13:03:44.189029 4721 generic.go:334] "Generic (PLEG): container finished" podID="b8d8781e-3259-4d55-b0d2-968979b5cd99" containerID="ad974937e7efafb3ad0f3ccfafa676f54039c93d4ff006428e8fe6f5b648894d" exitCode=0
Feb 02 13:03:44 crc kubenswrapper[4721]: I0202 13:03:44.189368 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b8d8781e-3259-4d55-b0d2-968979b5cd99","Type":"ContainerDied","Data":"ad974937e7efafb3ad0f3ccfafa676f54039c93d4ff006428e8fe6f5b648894d"}
Feb 02 13:03:44 crc kubenswrapper[4721]: I0202 13:03:44.195400 4721 generic.go:334] "Generic (PLEG): container finished" podID="598f8872-99ec-4855-9124-07a34b4ceaf9" containerID="dcf732d3a734d4bb94ed49bdb0e62cb781c98472db6dbf2ecb3f859e26f8167a" exitCode=0
Feb 02 13:03:44 crc kubenswrapper[4721]: I0202 13:03:44.195545 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"598f8872-99ec-4855-9124-07a34b4ceaf9","Type":"ContainerDied","Data":"dcf732d3a734d4bb94ed49bdb0e62cb781c98472db6dbf2ecb3f859e26f8167a"}
Feb 02 13:03:44 crc kubenswrapper[4721]: I0202 13:03:44.394661 4721 patch_prober.go:28] interesting pod/router-default-5444994796-7vhgv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 13:03:44 crc kubenswrapper[4721]: [-]has-synced failed: reason withheld
Feb 02 13:03:44 crc kubenswrapper[4721]: [+]process-running ok
Feb 02 13:03:44 crc kubenswrapper[4721]: healthz check failed
Feb 02 13:03:44 crc kubenswrapper[4721]: I0202 13:03:44.394767 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7vhgv" podUID="b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 13:03:44 crc kubenswrapper[4721]: I0202 13:03:44.765053 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 13:03:44 crc kubenswrapper[4721]: I0202 13:03:44.765137 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 13:03:45 crc kubenswrapper[4721]: I0202 13:03:45.395271 4721 patch_prober.go:28] interesting pod/router-default-5444994796-7vhgv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 13:03:45 crc kubenswrapper[4721]: [-]has-synced failed: reason withheld
Feb 02 13:03:45 crc kubenswrapper[4721]: [+]process-running ok
Feb 02 13:03:45 crc kubenswrapper[4721]: healthz check failed
Feb 02 13:03:45 crc kubenswrapper[4721]: I0202 13:03:45.395342 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7vhgv" podUID="b2cdcb8a-fc4a-4f72-8a37-c7c0351ca2eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 13:03:45 crc kubenswrapper[4721]: I0202 13:03:45.736012 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 13:03:45 crc kubenswrapper[4721]: I0202 13:03:45.813756 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 13:03:45 crc kubenswrapper[4721]: I0202 13:03:45.844964 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b8d8781e-3259-4d55-b0d2-968979b5cd99-kubelet-dir\") pod \"b8d8781e-3259-4d55-b0d2-968979b5cd99\" (UID: \"b8d8781e-3259-4d55-b0d2-968979b5cd99\") " Feb 02 13:03:45 crc kubenswrapper[4721]: I0202 13:03:45.845220 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8d8781e-3259-4d55-b0d2-968979b5cd99-kube-api-access\") pod \"b8d8781e-3259-4d55-b0d2-968979b5cd99\" (UID: \"b8d8781e-3259-4d55-b0d2-968979b5cd99\") " Feb 02 13:03:45 crc kubenswrapper[4721]: I0202 13:03:45.845115 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8d8781e-3259-4d55-b0d2-968979b5cd99-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b8d8781e-3259-4d55-b0d2-968979b5cd99" (UID: "b8d8781e-3259-4d55-b0d2-968979b5cd99"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:03:45 crc kubenswrapper[4721]: I0202 13:03:45.845772 4721 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b8d8781e-3259-4d55-b0d2-968979b5cd99-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 13:03:45 crc kubenswrapper[4721]: I0202 13:03:45.861297 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8d8781e-3259-4d55-b0d2-968979b5cd99-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b8d8781e-3259-4d55-b0d2-968979b5cd99" (UID: "b8d8781e-3259-4d55-b0d2-968979b5cd99"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:03:45 crc kubenswrapper[4721]: I0202 13:03:45.947101 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/598f8872-99ec-4855-9124-07a34b4ceaf9-kubelet-dir\") pod \"598f8872-99ec-4855-9124-07a34b4ceaf9\" (UID: \"598f8872-99ec-4855-9124-07a34b4ceaf9\") " Feb 02 13:03:45 crc kubenswrapper[4721]: I0202 13:03:45.947184 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/598f8872-99ec-4855-9124-07a34b4ceaf9-kube-api-access\") pod \"598f8872-99ec-4855-9124-07a34b4ceaf9\" (UID: \"598f8872-99ec-4855-9124-07a34b4ceaf9\") " Feb 02 13:03:45 crc kubenswrapper[4721]: I0202 13:03:45.947493 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8d8781e-3259-4d55-b0d2-968979b5cd99-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 13:03:45 crc kubenswrapper[4721]: I0202 13:03:45.947882 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/598f8872-99ec-4855-9124-07a34b4ceaf9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "598f8872-99ec-4855-9124-07a34b4ceaf9" (UID: "598f8872-99ec-4855-9124-07a34b4ceaf9"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:03:45 crc kubenswrapper[4721]: I0202 13:03:45.954089 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/598f8872-99ec-4855-9124-07a34b4ceaf9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "598f8872-99ec-4855-9124-07a34b4ceaf9" (UID: "598f8872-99ec-4855-9124-07a34b4ceaf9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:03:46 crc kubenswrapper[4721]: I0202 13:03:46.054158 4721 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/598f8872-99ec-4855-9124-07a34b4ceaf9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 13:03:46 crc kubenswrapper[4721]: I0202 13:03:46.054202 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/598f8872-99ec-4855-9124-07a34b4ceaf9-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 13:03:46 crc kubenswrapper[4721]: I0202 13:03:46.220782 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 13:03:46 crc kubenswrapper[4721]: I0202 13:03:46.220823 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b8d8781e-3259-4d55-b0d2-968979b5cd99","Type":"ContainerDied","Data":"1082372af7a0adcfdeb24cb7893016cd78e285acfcc8814c2bd902fe4631a01a"} Feb 02 13:03:46 crc kubenswrapper[4721]: I0202 13:03:46.220884 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1082372af7a0adcfdeb24cb7893016cd78e285acfcc8814c2bd902fe4631a01a" Feb 02 13:03:46 crc kubenswrapper[4721]: I0202 13:03:46.225735 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"598f8872-99ec-4855-9124-07a34b4ceaf9","Type":"ContainerDied","Data":"e681de4a8185193feb8aeae9c251e9c185dfaa6492e65f34c92fdd330062b5b0"} Feb 02 13:03:46 crc kubenswrapper[4721]: I0202 13:03:46.225772 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e681de4a8185193feb8aeae9c251e9c185dfaa6492e65f34c92fdd330062b5b0" Feb 02 13:03:46 crc kubenswrapper[4721]: I0202 13:03:46.225835 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 13:03:46 crc kubenswrapper[4721]: I0202 13:03:46.395317 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-7vhgv" Feb 02 13:03:46 crc kubenswrapper[4721]: I0202 13:03:46.404387 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-7vhgv" Feb 02 13:03:46 crc kubenswrapper[4721]: I0202 13:03:46.852289 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-kh9ph" Feb 02 13:03:48 crc kubenswrapper[4721]: I0202 13:03:48.116201 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfab3ffb-8798-423d-9b55-83868b76a14e-metrics-certs\") pod \"network-metrics-daemon-xqz79\" (UID: \"bfab3ffb-8798-423d-9b55-83868b76a14e\") " pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:03:48 crc kubenswrapper[4721]: I0202 13:03:48.123615 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfab3ffb-8798-423d-9b55-83868b76a14e-metrics-certs\") pod \"network-metrics-daemon-xqz79\" (UID: \"bfab3ffb-8798-423d-9b55-83868b76a14e\") " pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:03:48 crc kubenswrapper[4721]: I0202 13:03:48.147089 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqz79" Feb 02 13:03:51 crc kubenswrapper[4721]: I0202 13:03:51.019764 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-zt9ng" Feb 02 13:03:51 crc kubenswrapper[4721]: I0202 13:03:51.123348 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-2dsnx" Feb 02 13:03:51 crc kubenswrapper[4721]: I0202 13:03:51.127786 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-2dsnx" Feb 02 13:03:58 crc kubenswrapper[4721]: I0202 13:03:58.823309 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:04:07 crc kubenswrapper[4721]: I0202 13:04:07.444691 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 13:04:07 crc kubenswrapper[4721]: E0202 13:04:07.707421 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 02 13:04:07 crc kubenswrapper[4721]: E0202 13:04:07.707907 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srrj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-ftf6s_openshift-marketplace(7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 13:04:07 crc kubenswrapper[4721]: E0202 13:04:07.709868 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-ftf6s" podUID="7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b" Feb 02 13:04:11 crc kubenswrapper[4721]: E0202 13:04:11.352045 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ftf6s" podUID="7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b" Feb 02 13:04:11 crc kubenswrapper[4721]: I0202 13:04:11.722817 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2wzzt" Feb 02 13:04:13 crc kubenswrapper[4721]: E0202 13:04:13.540254 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 02 13:04:13 crc kubenswrapper[4721]: E0202 13:04:13.545062 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jdlcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-hpqtk_openshift-marketplace(5b5702e4-96dd-479b-871a-d69bfdba91e1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 13:04:13 crc kubenswrapper[4721]: E0202 13:04:13.546305 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-hpqtk" podUID="5b5702e4-96dd-479b-871a-d69bfdba91e1" Feb 02 13:04:13 crc kubenswrapper[4721]: E0202 13:04:13.590905 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 02 13:04:13 crc kubenswrapper[4721]: E0202 13:04:13.591583 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bgmqw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-s2tcj_openshift-marketplace(c32d34a1-8dd8-435d-9491-748392c25b97): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 13:04:13 crc kubenswrapper[4721]: E0202 13:04:13.592848 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-s2tcj" podUID="c32d34a1-8dd8-435d-9491-748392c25b97" Feb 02 13:04:13 crc kubenswrapper[4721]: I0202 13:04:13.974996 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xqz79"] Feb 02 13:04:14 crc kubenswrapper[4721]: W0202 13:04:14.009440 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfab3ffb_8798_423d_9b55_83868b76a14e.slice/crio-887920ec11b80f3e5c08efc430c8a9513fd6b9bc0af16af51b3800b7e577e87c WatchSource:0}: Error finding container 887920ec11b80f3e5c08efc430c8a9513fd6b9bc0af16af51b3800b7e577e87c: Status 404 returned error can't find the container with id 887920ec11b80f3e5c08efc430c8a9513fd6b9bc0af16af51b3800b7e577e87c Feb 02 13:04:14 crc kubenswrapper[4721]: I0202 13:04:14.158887 4721 generic.go:334] "Generic (PLEG): container finished" podID="3275da10-006d-43c9-bdd6-46282b8ac9d1" containerID="3a799bbd42f589f408396d566c14a41d1e41856170a302c97a49ff32d09288f3" exitCode=0 Feb 02 13:04:14 crc kubenswrapper[4721]: I0202 13:04:14.159237 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95btx" event={"ID":"3275da10-006d-43c9-bdd6-46282b8ac9d1","Type":"ContainerDied","Data":"3a799bbd42f589f408396d566c14a41d1e41856170a302c97a49ff32d09288f3"} Feb 02 13:04:14 crc kubenswrapper[4721]: I0202 13:04:14.161992 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xqz79" 
event={"ID":"bfab3ffb-8798-423d-9b55-83868b76a14e","Type":"ContainerStarted","Data":"887920ec11b80f3e5c08efc430c8a9513fd6b9bc0af16af51b3800b7e577e87c"} Feb 02 13:04:14 crc kubenswrapper[4721]: I0202 13:04:14.168240 4721 generic.go:334] "Generic (PLEG): container finished" podID="d0f9579e-e58d-40b4-82c4-83111bfa9735" containerID="90052ca542d1d90d3ece1db6282e32c41dc45e759cbb4a25db16d8178731746b" exitCode=0 Feb 02 13:04:14 crc kubenswrapper[4721]: I0202 13:04:14.168314 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jcv9" event={"ID":"d0f9579e-e58d-40b4-82c4-83111bfa9735","Type":"ContainerDied","Data":"90052ca542d1d90d3ece1db6282e32c41dc45e759cbb4a25db16d8178731746b"} Feb 02 13:04:14 crc kubenswrapper[4721]: I0202 13:04:14.172585 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pm5t7" event={"ID":"b97707af-edd5-4907-9459-615b32a005e6","Type":"ContainerStarted","Data":"0d83a50e35994a069f84ac2fbbbdd424065961585a6b4d7fa1391779a81dfd2b"} Feb 02 13:04:14 crc kubenswrapper[4721]: I0202 13:04:14.175801 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqxjr" event={"ID":"fca822b5-78c8-47d9-9cc5-266118a2b5aa","Type":"ContainerStarted","Data":"6b1f19b1332f89efa38077451d6165f2371bd16012b96e8fb0038bfe0c3a6dbe"} Feb 02 13:04:14 crc kubenswrapper[4721]: I0202 13:04:14.182680 4721 generic.go:334] "Generic (PLEG): container finished" podID="b11b9dcc-682e-48c6-9948-78aafcaf9e36" containerID="e1bc2f4e0704a9ccfc7c0081b3e11299c3f9620d4305892b40a44031af50c66b" exitCode=0 Feb 02 13:04:14 crc kubenswrapper[4721]: I0202 13:04:14.183392 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75gx6" event={"ID":"b11b9dcc-682e-48c6-9948-78aafcaf9e36","Type":"ContainerDied","Data":"e1bc2f4e0704a9ccfc7c0081b3e11299c3f9620d4305892b40a44031af50c66b"} Feb 02 13:04:14 crc kubenswrapper[4721]: E0202 13:04:14.185215 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-hpqtk" podUID="5b5702e4-96dd-479b-871a-d69bfdba91e1" Feb 02 13:04:14 crc kubenswrapper[4721]: E0202 13:04:14.186810 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-s2tcj" podUID="c32d34a1-8dd8-435d-9491-748392c25b97" Feb 02 13:04:14 crc kubenswrapper[4721]: I0202 13:04:14.764164 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:04:14 crc kubenswrapper[4721]: I0202 13:04:14.764242 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:04:15 crc kubenswrapper[4721]: I0202 13:04:15.191978 4721 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jcv9" event={"ID":"d0f9579e-e58d-40b4-82c4-83111bfa9735","Type":"ContainerStarted","Data":"c8f57b53de24ab26aa99f7a4280707e1dd37483fb97ed167765b2081fa94f563"} Feb 02 13:04:15 crc kubenswrapper[4721]: I0202 13:04:15.200367 4721 generic.go:334] "Generic (PLEG): container finished" podID="b97707af-edd5-4907-9459-615b32a005e6" containerID="0d83a50e35994a069f84ac2fbbbdd424065961585a6b4d7fa1391779a81dfd2b" exitCode=0 Feb 02 13:04:15 crc kubenswrapper[4721]: I0202 13:04:15.200488 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pm5t7" event={"ID":"b97707af-edd5-4907-9459-615b32a005e6","Type":"ContainerDied","Data":"0d83a50e35994a069f84ac2fbbbdd424065961585a6b4d7fa1391779a81dfd2b"} Feb 02 13:04:15 crc kubenswrapper[4721]: I0202 13:04:15.202434 4721 generic.go:334] "Generic (PLEG): container finished" podID="fca822b5-78c8-47d9-9cc5-266118a2b5aa" containerID="6b1f19b1332f89efa38077451d6165f2371bd16012b96e8fb0038bfe0c3a6dbe" exitCode=0 Feb 02 13:04:15 crc kubenswrapper[4721]: I0202 13:04:15.202497 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqxjr" event={"ID":"fca822b5-78c8-47d9-9cc5-266118a2b5aa","Type":"ContainerDied","Data":"6b1f19b1332f89efa38077451d6165f2371bd16012b96e8fb0038bfe0c3a6dbe"} Feb 02 13:04:15 crc kubenswrapper[4721]: I0202 13:04:15.209433 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75gx6" event={"ID":"b11b9dcc-682e-48c6-9948-78aafcaf9e36","Type":"ContainerStarted","Data":"7fb035f08da78b4db85bc1ea0be2acf6c348cb105b55a68b2cac0ebf26b8bc1a"} Feb 02 13:04:15 crc kubenswrapper[4721]: I0202 13:04:15.212421 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7jcv9" podStartSLOduration=3.610797873 podStartE2EDuration="35.212393736s" podCreationTimestamp="2026-02-02 13:03:40 +0000 UTC" firstStartedPulling="2026-02-02 13:03:43.179492196 +0000 UTC m=+163.482006585" lastFinishedPulling="2026-02-02 13:04:14.781088059 +0000 UTC m=+195.083602448" observedRunningTime="2026-02-02 13:04:15.207408376 +0000 UTC m=+195.509922765" watchObservedRunningTime="2026-02-02 13:04:15.212393736 +0000 UTC m=+195.514908125" Feb 02 13:04:15 crc kubenswrapper[4721]: I0202 13:04:15.213923 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95btx" event={"ID":"3275da10-006d-43c9-bdd6-46282b8ac9d1","Type":"ContainerStarted","Data":"94276164198a8658973658b0a07a815f463d34dcaac89175ee34daf72aa614d8"} Feb 02 13:04:15 crc kubenswrapper[4721]: I0202 13:04:15.216555 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xqz79" event={"ID":"bfab3ffb-8798-423d-9b55-83868b76a14e","Type":"ContainerStarted","Data":"9b833fb79bf3786baf344597d3cdbbaeab1780723f8978fbcfe0918a478d52e6"} Feb 02 13:04:15 crc kubenswrapper[4721]: I0202 13:04:15.216593 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xqz79" event={"ID":"bfab3ffb-8798-423d-9b55-83868b76a14e","Type":"ContainerStarted","Data":"2d59ca22026c845d2ad7159f1f4ba021a35d1cc4512b4a8368734bde90d61845"} Feb 02 13:04:15 crc kubenswrapper[4721]: I0202 13:04:15.272954 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-95btx" podStartSLOduration=2.649222793 
podStartE2EDuration="37.272934401s" podCreationTimestamp="2026-02-02 13:03:38 +0000 UTC" firstStartedPulling="2026-02-02 13:03:40.050220456 +0000 UTC m=+160.352734845" lastFinishedPulling="2026-02-02 13:04:14.673932064 +0000 UTC m=+194.976446453" observedRunningTime="2026-02-02 13:04:15.262568978 +0000 UTC m=+195.565083367" watchObservedRunningTime="2026-02-02 13:04:15.272934401 +0000 UTC m=+195.575448790" Feb 02 13:04:15 crc kubenswrapper[4721]: I0202 13:04:15.286790 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-xqz79" podStartSLOduration=170.286771303 podStartE2EDuration="2m50.286771303s" podCreationTimestamp="2026-02-02 13:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:04:15.282985427 +0000 UTC m=+195.585499816" watchObservedRunningTime="2026-02-02 13:04:15.286771303 +0000 UTC m=+195.589285692" Feb 02 13:04:15 crc kubenswrapper[4721]: I0202 13:04:15.308681 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-75gx6" podStartSLOduration=1.661019316 podStartE2EDuration="35.308663574s" podCreationTimestamp="2026-02-02 13:03:40 +0000 UTC" firstStartedPulling="2026-02-02 13:03:41.067791781 +0000 UTC m=+161.370306170" lastFinishedPulling="2026-02-02 13:04:14.715436039 +0000 UTC m=+195.017950428" observedRunningTime="2026-02-02 13:04:15.306481721 +0000 UTC m=+195.608996120" watchObservedRunningTime="2026-02-02 13:04:15.308663574 +0000 UTC m=+195.611177963" Feb 02 13:04:16 crc kubenswrapper[4721]: I0202 13:04:16.225908 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pm5t7" event={"ID":"b97707af-edd5-4907-9459-615b32a005e6","Type":"ContainerStarted","Data":"33ab21456de0805136a88c1d6ecd74979aabd0d3bfbd8cc958f6e35c18ed050d"} Feb 02 13:04:16 crc kubenswrapper[4721]: I0202 13:04:16.228086 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqxjr" event={"ID":"fca822b5-78c8-47d9-9cc5-266118a2b5aa","Type":"ContainerStarted","Data":"d88ecd85c5a2024374c4bb375130ae1821f595497ae59083522e66adc526cf21"} Feb 02 13:04:16 crc kubenswrapper[4721]: I0202 13:04:16.276107 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pm5t7" podStartSLOduration=2.712960337 podStartE2EDuration="35.276045926s" podCreationTimestamp="2026-02-02 13:03:41 +0000 UTC" firstStartedPulling="2026-02-02 13:03:43.130838248 +0000 UTC m=+163.433352637" lastFinishedPulling="2026-02-02 13:04:15.693923837 +0000 UTC m=+195.996438226" observedRunningTime="2026-02-02 13:04:16.250381479 +0000 UTC m=+196.552895888" watchObservedRunningTime="2026-02-02 13:04:16.276045926 +0000 UTC m=+196.578560335" Feb 02 13:04:18 crc kubenswrapper[4721]: I0202 13:04:18.951997 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-95btx" Feb 02 13:04:18 crc kubenswrapper[4721]: I0202 13:04:18.952355 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-95btx" Feb 02 13:04:19 crc kubenswrapper[4721]: I0202 13:04:19.194170 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-95btx" Feb 02 13:04:19 crc kubenswrapper[4721]: I0202 13:04:19.212788 4721 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xqxjr" podStartSLOduration=6.809538319 podStartE2EDuration="38.212763223s" podCreationTimestamp="2026-02-02 13:03:41 +0000 UTC" firstStartedPulling="2026-02-02 13:03:44.198037058 +0000 UTC m=+164.500551437" lastFinishedPulling="2026-02-02 13:04:15.601261952 +0000 UTC m=+195.903776341" observedRunningTime="2026-02-02 13:04:16.276640903 +0000 UTC m=+196.579155302" watchObservedRunningTime="2026-02-02 13:04:19.212763223 +0000 UTC m=+199.515277622" Feb 02 13:04:19 crc kubenswrapper[4721]: I0202 13:04:19.308479 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-95btx" Feb 02 13:04:20 crc kubenswrapper[4721]: I0202 13:04:20.208817 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 13:04:20 crc kubenswrapper[4721]: E0202 13:04:20.209042 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="598f8872-99ec-4855-9124-07a34b4ceaf9" containerName="pruner" Feb 02 13:04:20 crc kubenswrapper[4721]: I0202 13:04:20.209055 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="598f8872-99ec-4855-9124-07a34b4ceaf9" containerName="pruner" Feb 02 13:04:20 crc kubenswrapper[4721]: E0202 13:04:20.209085 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8d8781e-3259-4d55-b0d2-968979b5cd99" containerName="pruner" Feb 02 13:04:20 crc kubenswrapper[4721]: I0202 13:04:20.209091 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8d8781e-3259-4d55-b0d2-968979b5cd99" containerName="pruner" Feb 02 13:04:20 crc kubenswrapper[4721]: I0202 13:04:20.209201 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8d8781e-3259-4d55-b0d2-968979b5cd99" containerName="pruner" Feb 02 13:04:20 crc kubenswrapper[4721]: I0202 13:04:20.209212 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="598f8872-99ec-4855-9124-07a34b4ceaf9" containerName="pruner" Feb 02 13:04:20 crc kubenswrapper[4721]: I0202 13:04:20.209560 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 13:04:20 crc kubenswrapper[4721]: I0202 13:04:20.213242 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 13:04:20 crc kubenswrapper[4721]: I0202 13:04:20.218061 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 13:04:20 crc kubenswrapper[4721]: I0202 13:04:20.226617 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 13:04:20 crc kubenswrapper[4721]: I0202 13:04:20.283166 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/429dff39-5ef0-4e43-99bc-30771e26645d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"429dff39-5ef0-4e43-99bc-30771e26645d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 13:04:20 crc kubenswrapper[4721]: I0202 13:04:20.283234 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/429dff39-5ef0-4e43-99bc-30771e26645d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"429dff39-5ef0-4e43-99bc-30771e26645d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 13:04:20 crc kubenswrapper[4721]: I0202 13:04:20.388738 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/429dff39-5ef0-4e43-99bc-30771e26645d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"429dff39-5ef0-4e43-99bc-30771e26645d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 13:04:20 crc kubenswrapper[4721]: I0202 13:04:20.388803 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/429dff39-5ef0-4e43-99bc-30771e26645d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"429dff39-5ef0-4e43-99bc-30771e26645d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 13:04:20 crc kubenswrapper[4721]: I0202 13:04:20.388890 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/429dff39-5ef0-4e43-99bc-30771e26645d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"429dff39-5ef0-4e43-99bc-30771e26645d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 13:04:20 crc kubenswrapper[4721]: I0202 13:04:20.400834 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-95btx"] Feb 02 13:04:20 crc kubenswrapper[4721]: I0202 13:04:20.412484 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/429dff39-5ef0-4e43-99bc-30771e26645d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"429dff39-5ef0-4e43-99bc-30771e26645d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 13:04:20 crc kubenswrapper[4721]: I0202 13:04:20.534725 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 13:04:20 crc kubenswrapper[4721]: I0202 13:04:20.683271 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-75gx6" Feb 02 13:04:20 crc kubenswrapper[4721]: I0202 13:04:20.683700 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-75gx6" Feb 02 13:04:20 crc kubenswrapper[4721]: I0202 13:04:20.735929 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-75gx6" Feb 02 13:04:21 crc kubenswrapper[4721]: I0202 13:04:21.011573 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 13:04:21 crc kubenswrapper[4721]: I0202 13:04:21.115381 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7jcv9" Feb 02 13:04:21 crc kubenswrapper[4721]: I0202 13:04:21.115860 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7jcv9" Feb 02 13:04:21 crc kubenswrapper[4721]: I0202 13:04:21.178953 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7jcv9" Feb 02 13:04:21 crc kubenswrapper[4721]: I0202 13:04:21.273521 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"429dff39-5ef0-4e43-99bc-30771e26645d","Type":"ContainerStarted","Data":"b69839db7fc14233bb04f6125cec3d414151d8a54e4db751a1dca8cdc74f05a2"} Feb 02 13:04:21 crc kubenswrapper[4721]: I0202 13:04:21.273924 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-95btx" podUID="3275da10-006d-43c9-bdd6-46282b8ac9d1" containerName="registry-server" containerID="cri-o://94276164198a8658973658b0a07a815f463d34dcaac89175ee34daf72aa614d8" gracePeriod=2 Feb 02 13:04:21 crc kubenswrapper[4721]: I0202 13:04:21.320922 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-75gx6" Feb 02 13:04:21 crc kubenswrapper[4721]: I0202 13:04:21.367632 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7jcv9" Feb 02 13:04:21 crc kubenswrapper[4721]: I0202 13:04:21.791305 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pm5t7" Feb 02 13:04:21 crc kubenswrapper[4721]: I0202 13:04:21.791394 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pm5t7" Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.174095 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-95btx" Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.212297 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xqxjr" Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.212353 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xqxjr" Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.250699 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xqxjr" Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.292449 4721 generic.go:334] "Generic (PLEG): container finished" podID="3275da10-006d-43c9-bdd6-46282b8ac9d1" containerID="94276164198a8658973658b0a07a815f463d34dcaac89175ee34daf72aa614d8" exitCode=0 Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.292514 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95btx" event={"ID":"3275da10-006d-43c9-bdd6-46282b8ac9d1","Type":"ContainerDied","Data":"94276164198a8658973658b0a07a815f463d34dcaac89175ee34daf72aa614d8"} Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.292564 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-95btx" Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.292584 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95btx" event={"ID":"3275da10-006d-43c9-bdd6-46282b8ac9d1","Type":"ContainerDied","Data":"d1d7313e7e9488250223e5d45503d08f43af174101c199e67ce71331531c040b"} Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.292608 4721 scope.go:117] "RemoveContainer" containerID="94276164198a8658973658b0a07a815f463d34dcaac89175ee34daf72aa614d8" Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.295320 4721 generic.go:334] "Generic (PLEG): container finished" podID="429dff39-5ef0-4e43-99bc-30771e26645d" containerID="f888fad76be81cd7d07bc8c827b1097ff3e61b82f112b31db96020604819b736" exitCode=0 Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.295396 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"429dff39-5ef0-4e43-99bc-30771e26645d","Type":"ContainerDied","Data":"f888fad76be81cd7d07bc8c827b1097ff3e61b82f112b31db96020604819b736"} Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.312846 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3275da10-006d-43c9-bdd6-46282b8ac9d1-catalog-content\") pod \"3275da10-006d-43c9-bdd6-46282b8ac9d1\" (UID: \"3275da10-006d-43c9-bdd6-46282b8ac9d1\") " Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.313096 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3275da10-006d-43c9-bdd6-46282b8ac9d1-utilities\") pod \"3275da10-006d-43c9-bdd6-46282b8ac9d1\" (UID: \"3275da10-006d-43c9-bdd6-46282b8ac9d1\") " Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.313283 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8cfp\" (UniqueName: \"kubernetes.io/projected/3275da10-006d-43c9-bdd6-46282b8ac9d1-kube-api-access-q8cfp\") pod \"3275da10-006d-43c9-bdd6-46282b8ac9d1\" (UID: 
\"3275da10-006d-43c9-bdd6-46282b8ac9d1\") " Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.314249 4721 scope.go:117] "RemoveContainer" containerID="3a799bbd42f589f408396d566c14a41d1e41856170a302c97a49ff32d09288f3" Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.315215 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3275da10-006d-43c9-bdd6-46282b8ac9d1-utilities" (OuterVolumeSpecName: "utilities") pod "3275da10-006d-43c9-bdd6-46282b8ac9d1" (UID: "3275da10-006d-43c9-bdd6-46282b8ac9d1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.320602 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3275da10-006d-43c9-bdd6-46282b8ac9d1-kube-api-access-q8cfp" (OuterVolumeSpecName: "kube-api-access-q8cfp") pod "3275da10-006d-43c9-bdd6-46282b8ac9d1" (UID: "3275da10-006d-43c9-bdd6-46282b8ac9d1"). InnerVolumeSpecName "kube-api-access-q8cfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.338183 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xqxjr" Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.353043 4721 scope.go:117] "RemoveContainer" containerID="e6fbf9234354b68f2d074039ff57046d5bb0dc1205805a01d7f0f4b5a8698da0" Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.377224 4721 scope.go:117] "RemoveContainer" containerID="94276164198a8658973658b0a07a815f463d34dcaac89175ee34daf72aa614d8" Feb 02 13:04:22 crc kubenswrapper[4721]: E0202 13:04:22.377842 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94276164198a8658973658b0a07a815f463d34dcaac89175ee34daf72aa614d8\": container with ID starting with 94276164198a8658973658b0a07a815f463d34dcaac89175ee34daf72aa614d8 not found: ID does not exist" containerID="94276164198a8658973658b0a07a815f463d34dcaac89175ee34daf72aa614d8" Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.377885 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94276164198a8658973658b0a07a815f463d34dcaac89175ee34daf72aa614d8"} err="failed to get container status \"94276164198a8658973658b0a07a815f463d34dcaac89175ee34daf72aa614d8\": rpc error: code = NotFound desc = could not find container \"94276164198a8658973658b0a07a815f463d34dcaac89175ee34daf72aa614d8\": container with ID starting with 94276164198a8658973658b0a07a815f463d34dcaac89175ee34daf72aa614d8 not found: ID does not exist" Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.377936 4721 scope.go:117] "RemoveContainer" containerID="3a799bbd42f589f408396d566c14a41d1e41856170a302c97a49ff32d09288f3" Feb 02 13:04:22 crc kubenswrapper[4721]: E0202 13:04:22.378630 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a799bbd42f589f408396d566c14a41d1e41856170a302c97a49ff32d09288f3\": container with ID starting with 3a799bbd42f589f408396d566c14a41d1e41856170a302c97a49ff32d09288f3 not found: ID does not exist" containerID="3a799bbd42f589f408396d566c14a41d1e41856170a302c97a49ff32d09288f3" Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.378760 4721 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3a799bbd42f589f408396d566c14a41d1e41856170a302c97a49ff32d09288f3"} err="failed to get container status \"3a799bbd42f589f408396d566c14a41d1e41856170a302c97a49ff32d09288f3\": rpc error: code = NotFound desc = could not find container \"3a799bbd42f589f408396d566c14a41d1e41856170a302c97a49ff32d09288f3\": container with ID starting with 3a799bbd42f589f408396d566c14a41d1e41856170a302c97a49ff32d09288f3 not found: ID does not exist" Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.378881 4721 scope.go:117] "RemoveContainer" containerID="e6fbf9234354b68f2d074039ff57046d5bb0dc1205805a01d7f0f4b5a8698da0" Feb 02 13:04:22 crc kubenswrapper[4721]: E0202 13:04:22.379494 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6fbf9234354b68f2d074039ff57046d5bb0dc1205805a01d7f0f4b5a8698da0\": container with ID starting with e6fbf9234354b68f2d074039ff57046d5bb0dc1205805a01d7f0f4b5a8698da0 not found: ID does not exist" containerID="e6fbf9234354b68f2d074039ff57046d5bb0dc1205805a01d7f0f4b5a8698da0" Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.379519 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6fbf9234354b68f2d074039ff57046d5bb0dc1205805a01d7f0f4b5a8698da0"} err="failed to get container status \"e6fbf9234354b68f2d074039ff57046d5bb0dc1205805a01d7f0f4b5a8698da0\": rpc error: code = NotFound desc = could not find container \"e6fbf9234354b68f2d074039ff57046d5bb0dc1205805a01d7f0f4b5a8698da0\": container with ID starting with e6fbf9234354b68f2d074039ff57046d5bb0dc1205805a01d7f0f4b5a8698da0 not found: ID does not exist" Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.390867 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3275da10-006d-43c9-bdd6-46282b8ac9d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3275da10-006d-43c9-bdd6-46282b8ac9d1" (UID: "3275da10-006d-43c9-bdd6-46282b8ac9d1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.415528 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3275da10-006d-43c9-bdd6-46282b8ac9d1-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.415577 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8cfp\" (UniqueName: \"kubernetes.io/projected/3275da10-006d-43c9-bdd6-46282b8ac9d1-kube-api-access-q8cfp\") on node \"crc\" DevicePath \"\"" Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.415595 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3275da10-006d-43c9-bdd6-46282b8ac9d1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.611686 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-95btx"] Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.615687 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-95btx"] Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.829282 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pm5t7" podUID="b97707af-edd5-4907-9459-615b32a005e6" containerName="registry-server" probeResult="failure" output=< Feb 02 13:04:22 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Feb 02 13:04:22 crc kubenswrapper[4721]: > Feb 02 13:04:22 crc kubenswrapper[4721]: I0202 13:04:22.994889 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7jcv9"] Feb 02 13:04:23 crc kubenswrapper[4721]: I0202 13:04:23.305090 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7jcv9" podUID="d0f9579e-e58d-40b4-82c4-83111bfa9735" containerName="registry-server" containerID="cri-o://c8f57b53de24ab26aa99f7a4280707e1dd37483fb97ed167765b2081fa94f563" gracePeriod=2 Feb 02 13:04:23 crc kubenswrapper[4721]: I0202 13:04:23.610397 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 13:04:23 crc kubenswrapper[4721]: I0202 13:04:23.736522 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/429dff39-5ef0-4e43-99bc-30771e26645d-kube-api-access\") pod \"429dff39-5ef0-4e43-99bc-30771e26645d\" (UID: \"429dff39-5ef0-4e43-99bc-30771e26645d\") " Feb 02 13:04:23 crc kubenswrapper[4721]: I0202 13:04:23.736623 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/429dff39-5ef0-4e43-99bc-30771e26645d-kubelet-dir\") pod \"429dff39-5ef0-4e43-99bc-30771e26645d\" (UID: \"429dff39-5ef0-4e43-99bc-30771e26645d\") " Feb 02 13:04:23 crc kubenswrapper[4721]: I0202 13:04:23.736929 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/429dff39-5ef0-4e43-99bc-30771e26645d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "429dff39-5ef0-4e43-99bc-30771e26645d" (UID: "429dff39-5ef0-4e43-99bc-30771e26645d"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:04:23 crc kubenswrapper[4721]: I0202 13:04:23.741562 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/429dff39-5ef0-4e43-99bc-30771e26645d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "429dff39-5ef0-4e43-99bc-30771e26645d" (UID: "429dff39-5ef0-4e43-99bc-30771e26645d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:04:23 crc kubenswrapper[4721]: I0202 13:04:23.837834 4721 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/429dff39-5ef0-4e43-99bc-30771e26645d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 13:04:23 crc kubenswrapper[4721]: I0202 13:04:23.837868 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/429dff39-5ef0-4e43-99bc-30771e26645d-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.252805 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7jcv9" Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.313031 4721 generic.go:334] "Generic (PLEG): container finished" podID="d0f9579e-e58d-40b4-82c4-83111bfa9735" containerID="c8f57b53de24ab26aa99f7a4280707e1dd37483fb97ed167765b2081fa94f563" exitCode=0 Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.313110 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jcv9" event={"ID":"d0f9579e-e58d-40b4-82c4-83111bfa9735","Type":"ContainerDied","Data":"c8f57b53de24ab26aa99f7a4280707e1dd37483fb97ed167765b2081fa94f563"} Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.313133 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7jcv9" Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.313150 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jcv9" event={"ID":"d0f9579e-e58d-40b4-82c4-83111bfa9735","Type":"ContainerDied","Data":"00041994ab43d697d41af55566c6a0ac8e00b0660330d7b33111225ed94d785c"} Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.313169 4721 scope.go:117] "RemoveContainer" containerID="c8f57b53de24ab26aa99f7a4280707e1dd37483fb97ed167765b2081fa94f563" Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.315671 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftf6s" event={"ID":"7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b","Type":"ContainerStarted","Data":"9828d087ebd0a3d61cf032280a26f38ac894879bfffd8ab8dc7b9c9e262b96fd"} Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.320430 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"429dff39-5ef0-4e43-99bc-30771e26645d","Type":"ContainerDied","Data":"b69839db7fc14233bb04f6125cec3d414151d8a54e4db751a1dca8cdc74f05a2"} Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.320492 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b69839db7fc14233bb04f6125cec3d414151d8a54e4db751a1dca8cdc74f05a2" Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.320612 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.338285 4721 scope.go:117] "RemoveContainer" containerID="90052ca542d1d90d3ece1db6282e32c41dc45e759cbb4a25db16d8178731746b" Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.352556 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24wc6\" (UniqueName: \"kubernetes.io/projected/d0f9579e-e58d-40b4-82c4-83111bfa9735-kube-api-access-24wc6\") pod \"d0f9579e-e58d-40b4-82c4-83111bfa9735\" (UID: \"d0f9579e-e58d-40b4-82c4-83111bfa9735\") " Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.352674 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0f9579e-e58d-40b4-82c4-83111bfa9735-utilities\") pod \"d0f9579e-e58d-40b4-82c4-83111bfa9735\" (UID: \"d0f9579e-e58d-40b4-82c4-83111bfa9735\") " Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.352793 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0f9579e-e58d-40b4-82c4-83111bfa9735-catalog-content\") pod \"d0f9579e-e58d-40b4-82c4-83111bfa9735\" (UID: \"d0f9579e-e58d-40b4-82c4-83111bfa9735\") " Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.353603 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0f9579e-e58d-40b4-82c4-83111bfa9735-utilities" (OuterVolumeSpecName: "utilities") pod "d0f9579e-e58d-40b4-82c4-83111bfa9735" (UID: "d0f9579e-e58d-40b4-82c4-83111bfa9735"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.358258 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0f9579e-e58d-40b4-82c4-83111bfa9735-kube-api-access-24wc6" (OuterVolumeSpecName: "kube-api-access-24wc6") pod "d0f9579e-e58d-40b4-82c4-83111bfa9735" (UID: "d0f9579e-e58d-40b4-82c4-83111bfa9735"). InnerVolumeSpecName "kube-api-access-24wc6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.363926 4721 scope.go:117] "RemoveContainer" containerID="3ccd0f09abd6b2021524950faafaef3b13ab04a32e2570fdc710dd7ecb17b03e" Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.377166 4721 scope.go:117] "RemoveContainer" containerID="c8f57b53de24ab26aa99f7a4280707e1dd37483fb97ed167765b2081fa94f563" Feb 02 13:04:24 crc kubenswrapper[4721]: E0202 13:04:24.377555 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8f57b53de24ab26aa99f7a4280707e1dd37483fb97ed167765b2081fa94f563\": container with ID starting with c8f57b53de24ab26aa99f7a4280707e1dd37483fb97ed167765b2081fa94f563 not found: ID does not exist" containerID="c8f57b53de24ab26aa99f7a4280707e1dd37483fb97ed167765b2081fa94f563" Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.377586 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8f57b53de24ab26aa99f7a4280707e1dd37483fb97ed167765b2081fa94f563"} err="failed to get container status \"c8f57b53de24ab26aa99f7a4280707e1dd37483fb97ed167765b2081fa94f563\": rpc error: code = NotFound desc = could not find container \"c8f57b53de24ab26aa99f7a4280707e1dd37483fb97ed167765b2081fa94f563\": container with ID starting with c8f57b53de24ab26aa99f7a4280707e1dd37483fb97ed167765b2081fa94f563 not found: ID does not exist" Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.377608 4721 scope.go:117] "RemoveContainer" containerID="90052ca542d1d90d3ece1db6282e32c41dc45e759cbb4a25db16d8178731746b" Feb 02 13:04:24 crc kubenswrapper[4721]: E0202 13:04:24.377860 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90052ca542d1d90d3ece1db6282e32c41dc45e759cbb4a25db16d8178731746b\": container with ID starting with 90052ca542d1d90d3ece1db6282e32c41dc45e759cbb4a25db16d8178731746b not found: ID does not exist" containerID="90052ca542d1d90d3ece1db6282e32c41dc45e759cbb4a25db16d8178731746b" Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.377887 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90052ca542d1d90d3ece1db6282e32c41dc45e759cbb4a25db16d8178731746b"} err="failed to get container status \"90052ca542d1d90d3ece1db6282e32c41dc45e759cbb4a25db16d8178731746b\": rpc error: code = NotFound desc = could not find container \"90052ca542d1d90d3ece1db6282e32c41dc45e759cbb4a25db16d8178731746b\": container with ID starting with 90052ca542d1d90d3ece1db6282e32c41dc45e759cbb4a25db16d8178731746b not found: ID does not exist" Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.377908 4721 scope.go:117] "RemoveContainer" containerID="3ccd0f09abd6b2021524950faafaef3b13ab04a32e2570fdc710dd7ecb17b03e" Feb 02 13:04:24 crc kubenswrapper[4721]: E0202 13:04:24.378214 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ccd0f09abd6b2021524950faafaef3b13ab04a32e2570fdc710dd7ecb17b03e\": container with ID starting with 3ccd0f09abd6b2021524950faafaef3b13ab04a32e2570fdc710dd7ecb17b03e not found: ID does not exist" containerID="3ccd0f09abd6b2021524950faafaef3b13ab04a32e2570fdc710dd7ecb17b03e" Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.378281 4721 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3ccd0f09abd6b2021524950faafaef3b13ab04a32e2570fdc710dd7ecb17b03e"} err="failed to get container status \"3ccd0f09abd6b2021524950faafaef3b13ab04a32e2570fdc710dd7ecb17b03e\": rpc error: code = NotFound desc = could not find container \"3ccd0f09abd6b2021524950faafaef3b13ab04a32e2570fdc710dd7ecb17b03e\": container with ID starting with 3ccd0f09abd6b2021524950faafaef3b13ab04a32e2570fdc710dd7ecb17b03e not found: ID does not exist" Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.383759 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0f9579e-e58d-40b4-82c4-83111bfa9735-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d0f9579e-e58d-40b4-82c4-83111bfa9735" (UID: "d0f9579e-e58d-40b4-82c4-83111bfa9735"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.418154 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3275da10-006d-43c9-bdd6-46282b8ac9d1" path="/var/lib/kubelet/pods/3275da10-006d-43c9-bdd6-46282b8ac9d1/volumes" Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.454223 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0f9579e-e58d-40b4-82c4-83111bfa9735-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.454774 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24wc6\" (UniqueName: \"kubernetes.io/projected/d0f9579e-e58d-40b4-82c4-83111bfa9735-kube-api-access-24wc6\") on node \"crc\" DevicePath \"\"" Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.454788 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0f9579e-e58d-40b4-82c4-83111bfa9735-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.627715 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7jcv9"] Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.636845 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7jcv9"] Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.793606 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xqxjr"] Feb 02 13:04:24 crc kubenswrapper[4721]: I0202 13:04:24.793961 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xqxjr" podUID="fca822b5-78c8-47d9-9cc5-266118a2b5aa" containerName="registry-server" containerID="cri-o://d88ecd85c5a2024374c4bb375130ae1821f595497ae59083522e66adc526cf21" gracePeriod=2 Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.009180 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 13:04:25 crc kubenswrapper[4721]: E0202 13:04:25.009401 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3275da10-006d-43c9-bdd6-46282b8ac9d1" containerName="extract-content" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.009419 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="3275da10-006d-43c9-bdd6-46282b8ac9d1" containerName="extract-content" Feb 02 13:04:25 crc kubenswrapper[4721]: E0202 13:04:25.009432 4721 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d0f9579e-e58d-40b4-82c4-83111bfa9735" containerName="extract-utilities" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.009439 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0f9579e-e58d-40b4-82c4-83111bfa9735" containerName="extract-utilities" Feb 02 13:04:25 crc kubenswrapper[4721]: E0202 13:04:25.009447 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0f9579e-e58d-40b4-82c4-83111bfa9735" containerName="extract-content" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.009453 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0f9579e-e58d-40b4-82c4-83111bfa9735" containerName="extract-content" Feb 02 13:04:25 crc kubenswrapper[4721]: E0202 13:04:25.009466 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0f9579e-e58d-40b4-82c4-83111bfa9735" containerName="registry-server" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.009472 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0f9579e-e58d-40b4-82c4-83111bfa9735" containerName="registry-server" Feb 02 13:04:25 crc kubenswrapper[4721]: E0202 13:04:25.009485 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3275da10-006d-43c9-bdd6-46282b8ac9d1" containerName="registry-server" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.009490 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="3275da10-006d-43c9-bdd6-46282b8ac9d1" containerName="registry-server" Feb 02 13:04:25 crc kubenswrapper[4721]: E0202 13:04:25.009499 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="429dff39-5ef0-4e43-99bc-30771e26645d" containerName="pruner" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.009505 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="429dff39-5ef0-4e43-99bc-30771e26645d" containerName="pruner" Feb 02 13:04:25 crc kubenswrapper[4721]: E0202 13:04:25.009514 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3275da10-006d-43c9-bdd6-46282b8ac9d1" containerName="extract-utilities" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.009521 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="3275da10-006d-43c9-bdd6-46282b8ac9d1" containerName="extract-utilities" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.009606 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0f9579e-e58d-40b4-82c4-83111bfa9735" containerName="registry-server" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.009616 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="3275da10-006d-43c9-bdd6-46282b8ac9d1" containerName="registry-server" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.009623 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="429dff39-5ef0-4e43-99bc-30771e26645d" containerName="pruner" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.009970 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.011949 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.015493 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.015937 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.161862 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9755d24d-ee48-44a4-aa63-5b014999e3a9-kube-api-access\") pod \"installer-9-crc\" (UID: \"9755d24d-ee48-44a4-aa63-5b014999e3a9\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.162229 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9755d24d-ee48-44a4-aa63-5b014999e3a9-var-lock\") pod \"installer-9-crc\" (UID: \"9755d24d-ee48-44a4-aa63-5b014999e3a9\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.162318 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9755d24d-ee48-44a4-aa63-5b014999e3a9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9755d24d-ee48-44a4-aa63-5b014999e3a9\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.198759 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xqxjr" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.263115 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8zdz\" (UniqueName: \"kubernetes.io/projected/fca822b5-78c8-47d9-9cc5-266118a2b5aa-kube-api-access-t8zdz\") pod \"fca822b5-78c8-47d9-9cc5-266118a2b5aa\" (UID: \"fca822b5-78c8-47d9-9cc5-266118a2b5aa\") " Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.263303 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fca822b5-78c8-47d9-9cc5-266118a2b5aa-catalog-content\") pod \"fca822b5-78c8-47d9-9cc5-266118a2b5aa\" (UID: \"fca822b5-78c8-47d9-9cc5-266118a2b5aa\") " Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.263339 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fca822b5-78c8-47d9-9cc5-266118a2b5aa-utilities\") pod \"fca822b5-78c8-47d9-9cc5-266118a2b5aa\" (UID: \"fca822b5-78c8-47d9-9cc5-266118a2b5aa\") " Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.263482 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9755d24d-ee48-44a4-aa63-5b014999e3a9-kube-api-access\") pod \"installer-9-crc\" (UID: \"9755d24d-ee48-44a4-aa63-5b014999e3a9\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.263519 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9755d24d-ee48-44a4-aa63-5b014999e3a9-var-lock\") pod \"installer-9-crc\" (UID: \"9755d24d-ee48-44a4-aa63-5b014999e3a9\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.263561 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9755d24d-ee48-44a4-aa63-5b014999e3a9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9755d24d-ee48-44a4-aa63-5b014999e3a9\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.263651 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9755d24d-ee48-44a4-aa63-5b014999e3a9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9755d24d-ee48-44a4-aa63-5b014999e3a9\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.265253 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fca822b5-78c8-47d9-9cc5-266118a2b5aa-utilities" (OuterVolumeSpecName: "utilities") pod "fca822b5-78c8-47d9-9cc5-266118a2b5aa" (UID: "fca822b5-78c8-47d9-9cc5-266118a2b5aa"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.268346 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9755d24d-ee48-44a4-aa63-5b014999e3a9-var-lock\") pod \"installer-9-crc\" (UID: \"9755d24d-ee48-44a4-aa63-5b014999e3a9\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.271358 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fca822b5-78c8-47d9-9cc5-266118a2b5aa-kube-api-access-t8zdz" (OuterVolumeSpecName: "kube-api-access-t8zdz") pod "fca822b5-78c8-47d9-9cc5-266118a2b5aa" (UID: "fca822b5-78c8-47d9-9cc5-266118a2b5aa"). InnerVolumeSpecName "kube-api-access-t8zdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.287377 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9755d24d-ee48-44a4-aa63-5b014999e3a9-kube-api-access\") pod \"installer-9-crc\" (UID: \"9755d24d-ee48-44a4-aa63-5b014999e3a9\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.326991 4721 generic.go:334] "Generic (PLEG): container finished" podID="7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b" containerID="9828d087ebd0a3d61cf032280a26f38ac894879bfffd8ab8dc7b9c9e262b96fd" exitCode=0 Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.327054 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftf6s" event={"ID":"7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b","Type":"ContainerDied","Data":"9828d087ebd0a3d61cf032280a26f38ac894879bfffd8ab8dc7b9c9e262b96fd"} Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.331179 4721 generic.go:334] "Generic (PLEG): container finished" podID="fca822b5-78c8-47d9-9cc5-266118a2b5aa" containerID="d88ecd85c5a2024374c4bb375130ae1821f595497ae59083522e66adc526cf21" exitCode=0 Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.331226 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqxjr" event={"ID":"fca822b5-78c8-47d9-9cc5-266118a2b5aa","Type":"ContainerDied","Data":"d88ecd85c5a2024374c4bb375130ae1821f595497ae59083522e66adc526cf21"} Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.331270 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqxjr" event={"ID":"fca822b5-78c8-47d9-9cc5-266118a2b5aa","Type":"ContainerDied","Data":"47060e9c65a5aeb6ad4fedeb9a16d1cc215190986f8d21247996042fbd85459e"} Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.331258 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xqxjr" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.331285 4721 scope.go:117] "RemoveContainer" containerID="d88ecd85c5a2024374c4bb375130ae1821f595497ae59083522e66adc526cf21" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.337796 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.357661 4721 scope.go:117] "RemoveContainer" containerID="6b1f19b1332f89efa38077451d6165f2371bd16012b96e8fb0038bfe0c3a6dbe" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.367276 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fca822b5-78c8-47d9-9cc5-266118a2b5aa-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.367313 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8zdz\" (UniqueName: \"kubernetes.io/projected/fca822b5-78c8-47d9-9cc5-266118a2b5aa-kube-api-access-t8zdz\") on node \"crc\" DevicePath \"\"" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.378539 4721 scope.go:117] "RemoveContainer" containerID="1307da490b649bddfe1be4c1fdffa3d95bafee24aa3239fe6da07477129dd995" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.402713 4721 scope.go:117] "RemoveContainer" containerID="d88ecd85c5a2024374c4bb375130ae1821f595497ae59083522e66adc526cf21" Feb 02 13:04:25 crc kubenswrapper[4721]: E0202 13:04:25.404269 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d88ecd85c5a2024374c4bb375130ae1821f595497ae59083522e66adc526cf21\": container with ID starting with d88ecd85c5a2024374c4bb375130ae1821f595497ae59083522e66adc526cf21 not found: ID does not exist" containerID="d88ecd85c5a2024374c4bb375130ae1821f595497ae59083522e66adc526cf21" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.404314 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d88ecd85c5a2024374c4bb375130ae1821f595497ae59083522e66adc526cf21"} err="failed to get container status \"d88ecd85c5a2024374c4bb375130ae1821f595497ae59083522e66adc526cf21\": rpc error: code = NotFound desc = could not find container \"d88ecd85c5a2024374c4bb375130ae1821f595497ae59083522e66adc526cf21\": container with ID starting with d88ecd85c5a2024374c4bb375130ae1821f595497ae59083522e66adc526cf21 not found: ID does not exist" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.404345 4721 scope.go:117] "RemoveContainer" containerID="6b1f19b1332f89efa38077451d6165f2371bd16012b96e8fb0038bfe0c3a6dbe" Feb 02 13:04:25 crc kubenswrapper[4721]: E0202 13:04:25.404653 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b1f19b1332f89efa38077451d6165f2371bd16012b96e8fb0038bfe0c3a6dbe\": container with ID starting with 6b1f19b1332f89efa38077451d6165f2371bd16012b96e8fb0038bfe0c3a6dbe not found: ID does not exist" containerID="6b1f19b1332f89efa38077451d6165f2371bd16012b96e8fb0038bfe0c3a6dbe" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.404698 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b1f19b1332f89efa38077451d6165f2371bd16012b96e8fb0038bfe0c3a6dbe"} err="failed to get container status \"6b1f19b1332f89efa38077451d6165f2371bd16012b96e8fb0038bfe0c3a6dbe\": rpc error: code = NotFound desc = could not find container \"6b1f19b1332f89efa38077451d6165f2371bd16012b96e8fb0038bfe0c3a6dbe\": container with ID starting with 6b1f19b1332f89efa38077451d6165f2371bd16012b96e8fb0038bfe0c3a6dbe not found: ID does not exist" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.404724 4721 scope.go:117] "RemoveContainer" 
containerID="1307da490b649bddfe1be4c1fdffa3d95bafee24aa3239fe6da07477129dd995" Feb 02 13:04:25 crc kubenswrapper[4721]: E0202 13:04:25.404962 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1307da490b649bddfe1be4c1fdffa3d95bafee24aa3239fe6da07477129dd995\": container with ID starting with 1307da490b649bddfe1be4c1fdffa3d95bafee24aa3239fe6da07477129dd995 not found: ID does not exist" containerID="1307da490b649bddfe1be4c1fdffa3d95bafee24aa3239fe6da07477129dd995" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.404995 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1307da490b649bddfe1be4c1fdffa3d95bafee24aa3239fe6da07477129dd995"} err="failed to get container status \"1307da490b649bddfe1be4c1fdffa3d95bafee24aa3239fe6da07477129dd995\": rpc error: code = NotFound desc = could not find container \"1307da490b649bddfe1be4c1fdffa3d95bafee24aa3239fe6da07477129dd995\": container with ID starting with 1307da490b649bddfe1be4c1fdffa3d95bafee24aa3239fe6da07477129dd995 not found: ID does not exist" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.582867 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 13:04:25 crc kubenswrapper[4721]: W0202 13:04:25.592516 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9755d24d_ee48_44a4_aa63_5b014999e3a9.slice/crio-77bc31793d5cf8c9c2a6f1e7dc74fa1124392a5cef395c56ee4646291d25f2eb WatchSource:0}: Error finding container 77bc31793d5cf8c9c2a6f1e7dc74fa1124392a5cef395c56ee4646291d25f2eb: Status 404 returned error can't find the container with id 77bc31793d5cf8c9c2a6f1e7dc74fa1124392a5cef395c56ee4646291d25f2eb Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.716330 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fca822b5-78c8-47d9-9cc5-266118a2b5aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fca822b5-78c8-47d9-9cc5-266118a2b5aa" (UID: "fca822b5-78c8-47d9-9cc5-266118a2b5aa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.773764 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fca822b5-78c8-47d9-9cc5-266118a2b5aa-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.958946 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xqxjr"] Feb 02 13:04:25 crc kubenswrapper[4721]: I0202 13:04:25.963011 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xqxjr"] Feb 02 13:04:26 crc kubenswrapper[4721]: I0202 13:04:26.338524 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9755d24d-ee48-44a4-aa63-5b014999e3a9","Type":"ContainerStarted","Data":"ee2400de463f4d9467d730014ae34268bf8cc03fd437b95ba162e6c9f10ba80b"} Feb 02 13:04:26 crc kubenswrapper[4721]: I0202 13:04:26.338788 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9755d24d-ee48-44a4-aa63-5b014999e3a9","Type":"ContainerStarted","Data":"77bc31793d5cf8c9c2a6f1e7dc74fa1124392a5cef395c56ee4646291d25f2eb"} Feb 02 13:04:26 crc kubenswrapper[4721]: I0202 13:04:26.424955 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0f9579e-e58d-40b4-82c4-83111bfa9735" path="/var/lib/kubelet/pods/d0f9579e-e58d-40b4-82c4-83111bfa9735/volumes" Feb 02 13:04:26 crc kubenswrapper[4721]: I0202 13:04:26.426030 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fca822b5-78c8-47d9-9cc5-266118a2b5aa" path="/var/lib/kubelet/pods/fca822b5-78c8-47d9-9cc5-266118a2b5aa/volumes" Feb 02 13:04:27 crc kubenswrapper[4721]: I0202 13:04:27.348042 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftf6s" event={"ID":"7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b","Type":"ContainerStarted","Data":"410edca835d21d18be323b803ab840df97552992c0ec9f405104a39dad3828e0"} Feb 02 13:04:27 crc kubenswrapper[4721]: I0202 13:04:27.365732 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.365711874 podStartE2EDuration="3.365711874s" podCreationTimestamp="2026-02-02 13:04:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:04:27.364280903 +0000 UTC m=+207.666795292" watchObservedRunningTime="2026-02-02 13:04:27.365711874 +0000 UTC m=+207.668226263" Feb 02 13:04:27 crc kubenswrapper[4721]: I0202 13:04:27.394433 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ftf6s" podStartSLOduration=2.335025194 podStartE2EDuration="49.394406042s" podCreationTimestamp="2026-02-02 13:03:38 +0000 UTC" firstStartedPulling="2026-02-02 13:03:40.037664411 +0000 UTC m=+160.340178800" lastFinishedPulling="2026-02-02 13:04:27.097045239 +0000 UTC m=+207.399559648" observedRunningTime="2026-02-02 13:04:27.390222904 +0000 UTC m=+207.692737293" watchObservedRunningTime="2026-02-02 13:04:27.394406042 +0000 UTC m=+207.696920431" Feb 02 13:04:28 crc kubenswrapper[4721]: I0202 13:04:28.356628 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpqtk" 
event={"ID":"5b5702e4-96dd-479b-871a-d69bfdba91e1","Type":"ContainerStarted","Data":"cac3571fa985c8eb2f6d12ace5635fa81b355f00311311849ccc78de7cbf1b4c"} Feb 02 13:04:28 crc kubenswrapper[4721]: I0202 13:04:28.474540 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ftf6s" Feb 02 13:04:28 crc kubenswrapper[4721]: I0202 13:04:28.474633 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ftf6s" Feb 02 13:04:29 crc kubenswrapper[4721]: I0202 13:04:29.366374 4721 generic.go:334] "Generic (PLEG): container finished" podID="5b5702e4-96dd-479b-871a-d69bfdba91e1" containerID="cac3571fa985c8eb2f6d12ace5635fa81b355f00311311849ccc78de7cbf1b4c" exitCode=0 Feb 02 13:04:29 crc kubenswrapper[4721]: I0202 13:04:29.366480 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpqtk" event={"ID":"5b5702e4-96dd-479b-871a-d69bfdba91e1","Type":"ContainerDied","Data":"cac3571fa985c8eb2f6d12ace5635fa81b355f00311311849ccc78de7cbf1b4c"} Feb 02 13:04:29 crc kubenswrapper[4721]: I0202 13:04:29.512579 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-ftf6s" podUID="7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b" containerName="registry-server" probeResult="failure" output=< Feb 02 13:04:29 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Feb 02 13:04:29 crc kubenswrapper[4721]: > Feb 02 13:04:31 crc kubenswrapper[4721]: I0202 13:04:31.835607 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pm5t7" Feb 02 13:04:31 crc kubenswrapper[4721]: I0202 13:04:31.880925 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pm5t7" Feb 02 13:04:32 crc kubenswrapper[4721]: I0202 13:04:32.383420 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpqtk" event={"ID":"5b5702e4-96dd-479b-871a-d69bfdba91e1","Type":"ContainerStarted","Data":"658375981ca92686d238c2129054336aaf19b4f61909f67142ef9963bb6dbfe9"} Feb 02 13:04:32 crc kubenswrapper[4721]: I0202 13:04:32.386396 4721 generic.go:334] "Generic (PLEG): container finished" podID="c32d34a1-8dd8-435d-9491-748392c25b97" containerID="cb844a088120bc9509ad8ea45c170bf127b70cd4144302a7ad6ac587245cde76" exitCode=0 Feb 02 13:04:32 crc kubenswrapper[4721]: I0202 13:04:32.386465 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2tcj" event={"ID":"c32d34a1-8dd8-435d-9491-748392c25b97","Type":"ContainerDied","Data":"cb844a088120bc9509ad8ea45c170bf127b70cd4144302a7ad6ac587245cde76"} Feb 02 13:04:32 crc kubenswrapper[4721]: I0202 13:04:32.419681 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hpqtk" podStartSLOduration=3.184057842 podStartE2EDuration="54.419663968s" podCreationTimestamp="2026-02-02 13:03:38 +0000 UTC" firstStartedPulling="2026-02-02 13:03:40.055242239 +0000 UTC m=+160.357756628" lastFinishedPulling="2026-02-02 13:04:31.290848365 +0000 UTC m=+211.593362754" observedRunningTime="2026-02-02 13:04:32.399107919 +0000 UTC m=+212.701622318" watchObservedRunningTime="2026-02-02 13:04:32.419663968 +0000 UTC m=+212.722178357" Feb 02 13:04:33 crc kubenswrapper[4721]: I0202 13:04:33.394078 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-s2tcj" event={"ID":"c32d34a1-8dd8-435d-9491-748392c25b97","Type":"ContainerStarted","Data":"53d55fa7bab62269a87c795054de164d19a14b295c5f255ec1a3087c58689970"} Feb 02 13:04:33 crc kubenswrapper[4721]: I0202 13:04:33.411685 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s2tcj" podStartSLOduration=2.669611636 podStartE2EDuration="55.41166968s" podCreationTimestamp="2026-02-02 13:03:38 +0000 UTC" firstStartedPulling="2026-02-02 13:03:40.045848963 +0000 UTC m=+160.348363352" lastFinishedPulling="2026-02-02 13:04:32.787907007 +0000 UTC m=+213.090421396" observedRunningTime="2026-02-02 13:04:33.411051403 +0000 UTC m=+213.713565792" watchObservedRunningTime="2026-02-02 13:04:33.41166968 +0000 UTC m=+213.714184059" Feb 02 13:04:38 crc kubenswrapper[4721]: I0202 13:04:38.511754 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ftf6s" Feb 02 13:04:38 crc kubenswrapper[4721]: I0202 13:04:38.555249 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ftf6s" Feb 02 13:04:38 crc kubenswrapper[4721]: I0202 13:04:38.674182 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s2tcj" Feb 02 13:04:38 crc kubenswrapper[4721]: I0202 13:04:38.674290 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s2tcj" Feb 02 13:04:38 crc kubenswrapper[4721]: I0202 13:04:38.725488 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s2tcj" Feb 02 13:04:39 crc kubenswrapper[4721]: I0202 13:04:39.112861 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hpqtk" Feb 02 13:04:39 crc kubenswrapper[4721]: I0202 13:04:39.112906 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hpqtk" Feb 02 13:04:39 crc kubenswrapper[4721]: I0202 13:04:39.149110 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hpqtk" Feb 02 13:04:39 crc kubenswrapper[4721]: I0202 13:04:39.463279 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hpqtk" Feb 02 13:04:39 crc kubenswrapper[4721]: I0202 13:04:39.473089 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s2tcj" Feb 02 13:04:40 crc kubenswrapper[4721]: I0202 13:04:40.318966 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fqbhq"] Feb 02 13:04:40 crc kubenswrapper[4721]: I0202 13:04:40.743335 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hpqtk"] Feb 02 13:04:41 crc kubenswrapper[4721]: I0202 13:04:41.434535 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hpqtk" podUID="5b5702e4-96dd-479b-871a-d69bfdba91e1" containerName="registry-server" containerID="cri-o://658375981ca92686d238c2129054336aaf19b4f61909f67142ef9963bb6dbfe9" gracePeriod=2 Feb 02 13:04:42 crc kubenswrapper[4721]: I0202 13:04:42.443208 4721 generic.go:334] 
"Generic (PLEG): container finished" podID="5b5702e4-96dd-479b-871a-d69bfdba91e1" containerID="658375981ca92686d238c2129054336aaf19b4f61909f67142ef9963bb6dbfe9" exitCode=0 Feb 02 13:04:42 crc kubenswrapper[4721]: I0202 13:04:42.443311 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpqtk" event={"ID":"5b5702e4-96dd-479b-871a-d69bfdba91e1","Type":"ContainerDied","Data":"658375981ca92686d238c2129054336aaf19b4f61909f67142ef9963bb6dbfe9"} Feb 02 13:04:43 crc kubenswrapper[4721]: I0202 13:04:43.513973 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hpqtk" Feb 02 13:04:43 crc kubenswrapper[4721]: I0202 13:04:43.603006 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b5702e4-96dd-479b-871a-d69bfdba91e1-utilities\") pod \"5b5702e4-96dd-479b-871a-d69bfdba91e1\" (UID: \"5b5702e4-96dd-479b-871a-d69bfdba91e1\") " Feb 02 13:04:43 crc kubenswrapper[4721]: I0202 13:04:43.603430 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b5702e4-96dd-479b-871a-d69bfdba91e1-catalog-content\") pod \"5b5702e4-96dd-479b-871a-d69bfdba91e1\" (UID: \"5b5702e4-96dd-479b-871a-d69bfdba91e1\") " Feb 02 13:04:43 crc kubenswrapper[4721]: I0202 13:04:43.603682 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdlcg\" (UniqueName: \"kubernetes.io/projected/5b5702e4-96dd-479b-871a-d69bfdba91e1-kube-api-access-jdlcg\") pod \"5b5702e4-96dd-479b-871a-d69bfdba91e1\" (UID: \"5b5702e4-96dd-479b-871a-d69bfdba91e1\") " Feb 02 13:04:43 crc kubenswrapper[4721]: I0202 13:04:43.603823 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b5702e4-96dd-479b-871a-d69bfdba91e1-utilities" (OuterVolumeSpecName: "utilities") pod "5b5702e4-96dd-479b-871a-d69bfdba91e1" (UID: "5b5702e4-96dd-479b-871a-d69bfdba91e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:04:43 crc kubenswrapper[4721]: I0202 13:04:43.604252 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b5702e4-96dd-479b-871a-d69bfdba91e1-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:04:43 crc kubenswrapper[4721]: I0202 13:04:43.610054 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b5702e4-96dd-479b-871a-d69bfdba91e1-kube-api-access-jdlcg" (OuterVolumeSpecName: "kube-api-access-jdlcg") pod "5b5702e4-96dd-479b-871a-d69bfdba91e1" (UID: "5b5702e4-96dd-479b-871a-d69bfdba91e1"). InnerVolumeSpecName "kube-api-access-jdlcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:04:43 crc kubenswrapper[4721]: I0202 13:04:43.651356 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b5702e4-96dd-479b-871a-d69bfdba91e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b5702e4-96dd-479b-871a-d69bfdba91e1" (UID: "5b5702e4-96dd-479b-871a-d69bfdba91e1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:04:43 crc kubenswrapper[4721]: I0202 13:04:43.704997 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b5702e4-96dd-479b-871a-d69bfdba91e1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:04:43 crc kubenswrapper[4721]: I0202 13:04:43.705032 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdlcg\" (UniqueName: \"kubernetes.io/projected/5b5702e4-96dd-479b-871a-d69bfdba91e1-kube-api-access-jdlcg\") on node \"crc\" DevicePath \"\"" Feb 02 13:04:44 crc kubenswrapper[4721]: I0202 13:04:44.454187 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpqtk" event={"ID":"5b5702e4-96dd-479b-871a-d69bfdba91e1","Type":"ContainerDied","Data":"cee4e973cd74c01297b1880257bab371fd0bb8e104dad9318438c355e90ddc2e"} Feb 02 13:04:44 crc kubenswrapper[4721]: I0202 13:04:44.454284 4721 scope.go:117] "RemoveContainer" containerID="658375981ca92686d238c2129054336aaf19b4f61909f67142ef9963bb6dbfe9" Feb 02 13:04:44 crc kubenswrapper[4721]: I0202 13:04:44.454301 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hpqtk" Feb 02 13:04:44 crc kubenswrapper[4721]: I0202 13:04:44.467789 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hpqtk"] Feb 02 13:04:44 crc kubenswrapper[4721]: I0202 13:04:44.474577 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hpqtk"] Feb 02 13:04:44 crc kubenswrapper[4721]: I0202 13:04:44.478481 4721 scope.go:117] "RemoveContainer" containerID="cac3571fa985c8eb2f6d12ace5635fa81b355f00311311849ccc78de7cbf1b4c" Feb 02 13:04:44 crc kubenswrapper[4721]: I0202 13:04:44.500538 4721 scope.go:117] "RemoveContainer" containerID="7fd0cce05f07f725f6f879eba319ea2fa9bcc093917d1f8c17be638fe958ae8a" Feb 02 13:04:44 crc kubenswrapper[4721]: I0202 13:04:44.763826 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:04:44 crc kubenswrapper[4721]: I0202 13:04:44.763890 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:04:44 crc kubenswrapper[4721]: I0202 13:04:44.763933 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" Feb 02 13:04:44 crc kubenswrapper[4721]: I0202 13:04:44.764490 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591"} pod="openshift-machine-config-operator/machine-config-daemon-rppjz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 13:04:44 crc kubenswrapper[4721]: I0202 13:04:44.764548 4721 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" containerID="cri-o://142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591" gracePeriod=600 Feb 02 13:04:45 crc kubenswrapper[4721]: I0202 13:04:45.464813 4721 generic.go:334] "Generic (PLEG): container finished" podID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerID="142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591" exitCode=0 Feb 02 13:04:45 crc kubenswrapper[4721]: I0202 13:04:45.465046 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerDied","Data":"142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591"} Feb 02 13:04:46 crc kubenswrapper[4721]: I0202 13:04:46.415803 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b5702e4-96dd-479b-871a-d69bfdba91e1" path="/var/lib/kubelet/pods/5b5702e4-96dd-479b-871a-d69bfdba91e1/volumes" Feb 02 13:04:46 crc kubenswrapper[4721]: I0202 13:04:46.473292 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerStarted","Data":"b87a5ff0b46a353772eefb47418659c5151aa34831e5ecd0ccf0f601bac1ccfe"} Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.799855 4721 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.800900 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5" gracePeriod=15 Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.801048 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e" gracePeriod=15 Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.801107 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f" gracePeriod=15 Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.801139 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06" gracePeriod=15 Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.801167 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787" gracePeriod=15 Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.801890 
4721 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 13:05:03 crc kubenswrapper[4721]: E0202 13:05:03.802272 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.802286 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 02 13:05:03 crc kubenswrapper[4721]: E0202 13:05:03.802297 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca822b5-78c8-47d9-9cc5-266118a2b5aa" containerName="extract-content" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.802304 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca822b5-78c8-47d9-9cc5-266118a2b5aa" containerName="extract-content" Feb 02 13:05:03 crc kubenswrapper[4721]: E0202 13:05:03.802319 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.802349 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 13:05:03 crc kubenswrapper[4721]: E0202 13:05:03.802364 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b5702e4-96dd-479b-871a-d69bfdba91e1" containerName="registry-server" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.802371 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b5702e4-96dd-479b-871a-d69bfdba91e1" containerName="registry-server" Feb 02 13:05:03 crc kubenswrapper[4721]: E0202 13:05:03.802383 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.802390 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 13:05:03 crc kubenswrapper[4721]: E0202 13:05:03.802399 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.802405 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 13:05:03 crc kubenswrapper[4721]: E0202 13:05:03.802432 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca822b5-78c8-47d9-9cc5-266118a2b5aa" containerName="extract-utilities" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.802438 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca822b5-78c8-47d9-9cc5-266118a2b5aa" containerName="extract-utilities" Feb 02 13:05:03 crc kubenswrapper[4721]: E0202 13:05:03.802447 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b5702e4-96dd-479b-871a-d69bfdba91e1" containerName="extract-content" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.802455 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b5702e4-96dd-479b-871a-d69bfdba91e1" containerName="extract-content" Feb 02 13:05:03 crc kubenswrapper[4721]: E0202 13:05:03.802463 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 13:05:03 crc 
kubenswrapper[4721]: I0202 13:05:03.802469 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 13:05:03 crc kubenswrapper[4721]: E0202 13:05:03.802476 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.802503 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 13:05:03 crc kubenswrapper[4721]: E0202 13:05:03.802517 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b5702e4-96dd-479b-871a-d69bfdba91e1" containerName="extract-utilities" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.802524 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b5702e4-96dd-479b-871a-d69bfdba91e1" containerName="extract-utilities" Feb 02 13:05:03 crc kubenswrapper[4721]: E0202 13:05:03.802530 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca822b5-78c8-47d9-9cc5-266118a2b5aa" containerName="registry-server" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.802538 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca822b5-78c8-47d9-9cc5-266118a2b5aa" containerName="registry-server" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.802724 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.802764 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="fca822b5-78c8-47d9-9cc5-266118a2b5aa" containerName="registry-server" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.802777 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.802785 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.802831 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.802848 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.802857 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b5702e4-96dd-479b-871a-d69bfdba91e1" containerName="registry-server" Feb 02 13:05:03 crc kubenswrapper[4721]: E0202 13:05:03.802991 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.803001 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.803320 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 
13:05:03.804357 4721 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.805097 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.815976 4721 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.842738 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.859726 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.859783 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.859839 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.859855 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.859870 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.859887 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.859913 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.859939 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.961259 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.961319 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.961345 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.961365 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.961394 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.961427 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.961463 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.961486 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.961456 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.961521 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.961446 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.961575 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.961512 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.961614 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.961717 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:05:03 crc kubenswrapper[4721]: I0202 13:05:03.961806 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:05:04 crc kubenswrapper[4721]: I0202 13:05:04.138554 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:05:04 crc kubenswrapper[4721]: W0202 13:05:04.156312 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-bb9d2c698316f6af3480b619b2c5603728224c60aad632bbec2cec1c9b7ab9ae WatchSource:0}: Error finding container bb9d2c698316f6af3480b619b2c5603728224c60aad632bbec2cec1c9b7ab9ae: Status 404 returned error can't find the container with id bb9d2c698316f6af3480b619b2c5603728224c60aad632bbec2cec1c9b7ab9ae Feb 02 13:05:04 crc kubenswrapper[4721]: E0202 13:05:04.159439 4721 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.247:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18906fbb6280333d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 13:05:04.158741309 +0000 UTC m=+244.461255698,LastTimestamp:2026-02-02 13:05:04.158741309 +0000 UTC m=+244.461255698,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 13:05:04 crc kubenswrapper[4721]: I0202 13:05:04.588737 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ed7c7e2a904888901a446739bb41ce209fe73c11466f51414b1b4e5af67c4884"} Feb 02 13:05:04 crc kubenswrapper[4721]: I0202 13:05:04.588781 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"bb9d2c698316f6af3480b619b2c5603728224c60aad632bbec2cec1c9b7ab9ae"} Feb 02 13:05:04 crc kubenswrapper[4721]: I0202 13:05:04.589817 4721 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:04 crc kubenswrapper[4721]: I0202 13:05:04.591595 4721 generic.go:334] "Generic (PLEG): container finished" podID="9755d24d-ee48-44a4-aa63-5b014999e3a9" containerID="ee2400de463f4d9467d730014ae34268bf8cc03fd437b95ba162e6c9f10ba80b" exitCode=0 Feb 02 13:05:04 crc kubenswrapper[4721]: I0202 13:05:04.591652 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9755d24d-ee48-44a4-aa63-5b014999e3a9","Type":"ContainerDied","Data":"ee2400de463f4d9467d730014ae34268bf8cc03fd437b95ba162e6c9f10ba80b"} Feb 02 13:05:04 crc kubenswrapper[4721]: I0202 13:05:04.592086 
4721 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:04 crc kubenswrapper[4721]: I0202 13:05:04.592322 4721 status_manager.go:851] "Failed to get status for pod" podUID="9755d24d-ee48-44a4-aa63-5b014999e3a9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:04 crc kubenswrapper[4721]: I0202 13:05:04.594401 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 13:05:04 crc kubenswrapper[4721]: I0202 13:05:04.595828 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 13:05:04 crc kubenswrapper[4721]: I0202 13:05:04.596552 4721 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e" exitCode=0 Feb 02 13:05:04 crc kubenswrapper[4721]: I0202 13:05:04.596580 4721 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f" exitCode=0 Feb 02 13:05:04 crc kubenswrapper[4721]: I0202 13:05:04.596614 4721 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06" exitCode=0 Feb 02 13:05:04 crc kubenswrapper[4721]: I0202 13:05:04.596626 4721 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787" exitCode=2 Feb 02 13:05:04 crc kubenswrapper[4721]: I0202 13:05:04.596667 4721 scope.go:117] "RemoveContainer" containerID="db2bb86343542417950b2e7261cdad74cc3b3f05b9d471f222509715ac091a0d" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.353960 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" podUID="962524c6-7992-43d5-a7f3-5fdd04297f24" containerName="oauth-openshift" containerID="cri-o://81f1ed113ff28d261e45b8f089ff55331cfa48c5e28c0300ad3ede4e1aa70b95" gracePeriod=15 Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.609501 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.611962 4721 generic.go:334] "Generic (PLEG): container finished" podID="962524c6-7992-43d5-a7f3-5fdd04297f24" containerID="81f1ed113ff28d261e45b8f089ff55331cfa48c5e28c0300ad3ede4e1aa70b95" exitCode=0 Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.612265 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" 
event={"ID":"962524c6-7992-43d5-a7f3-5fdd04297f24","Type":"ContainerDied","Data":"81f1ed113ff28d261e45b8f089ff55331cfa48c5e28c0300ad3ede4e1aa70b95"} Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.720672 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.726324 4721 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.726708 4721 status_manager.go:851] "Failed to get status for pod" podUID="9755d24d-ee48-44a4-aa63-5b014999e3a9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.726917 4721 status_manager.go:851] "Failed to get status for pod" podUID="962524c6-7992-43d5-a7f3-5fdd04297f24" pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-fqbhq\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.844906 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.845616 4721 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.846105 4721 status_manager.go:851] "Failed to get status for pod" podUID="9755d24d-ee48-44a4-aa63-5b014999e3a9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.846421 4721 status_manager.go:851] "Failed to get status for pod" podUID="962524c6-7992-43d5-a7f3-5fdd04297f24" pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-fqbhq\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.891695 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/962524c6-7992-43d5-a7f3-5fdd04297f24-audit-dir\") pod \"962524c6-7992-43d5-a7f3-5fdd04297f24\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.892249 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-session\") pod \"962524c6-7992-43d5-a7f3-5fdd04297f24\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.892294 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-user-template-error\") pod \"962524c6-7992-43d5-a7f3-5fdd04297f24\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.892344 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-user-template-login\") pod \"962524c6-7992-43d5-a7f3-5fdd04297f24\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.892380 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-router-certs\") pod \"962524c6-7992-43d5-a7f3-5fdd04297f24\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.892260 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/962524c6-7992-43d5-a7f3-5fdd04297f24-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "962524c6-7992-43d5-a7f3-5fdd04297f24" (UID: "962524c6-7992-43d5-a7f3-5fdd04297f24"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.892406 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-user-idp-0-file-data\") pod \"962524c6-7992-43d5-a7f3-5fdd04297f24\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.892463 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-user-template-provider-selection\") pod \"962524c6-7992-43d5-a7f3-5fdd04297f24\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.892505 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-cliconfig\") pod \"962524c6-7992-43d5-a7f3-5fdd04297f24\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.892546 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l4n2\" (UniqueName: \"kubernetes.io/projected/962524c6-7992-43d5-a7f3-5fdd04297f24-kube-api-access-4l4n2\") pod \"962524c6-7992-43d5-a7f3-5fdd04297f24\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.892582 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-trusted-ca-bundle\") pod \"962524c6-7992-43d5-a7f3-5fdd04297f24\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.892614 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-serving-cert\") pod \"962524c6-7992-43d5-a7f3-5fdd04297f24\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.892665 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-ocp-branding-template\") pod \"962524c6-7992-43d5-a7f3-5fdd04297f24\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.892744 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/962524c6-7992-43d5-a7f3-5fdd04297f24-audit-policies\") pod \"962524c6-7992-43d5-a7f3-5fdd04297f24\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.892825 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-service-ca\") pod \"962524c6-7992-43d5-a7f3-5fdd04297f24\" (UID: \"962524c6-7992-43d5-a7f3-5fdd04297f24\") " Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.893287 4721 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/962524c6-7992-43d5-a7f3-5fdd04297f24-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.895350 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "962524c6-7992-43d5-a7f3-5fdd04297f24" (UID: "962524c6-7992-43d5-a7f3-5fdd04297f24"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.895999 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "962524c6-7992-43d5-a7f3-5fdd04297f24" (UID: "962524c6-7992-43d5-a7f3-5fdd04297f24"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.899347 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "962524c6-7992-43d5-a7f3-5fdd04297f24" (UID: "962524c6-7992-43d5-a7f3-5fdd04297f24"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.900806 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/962524c6-7992-43d5-a7f3-5fdd04297f24-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "962524c6-7992-43d5-a7f3-5fdd04297f24" (UID: "962524c6-7992-43d5-a7f3-5fdd04297f24"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.902915 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "962524c6-7992-43d5-a7f3-5fdd04297f24" (UID: "962524c6-7992-43d5-a7f3-5fdd04297f24"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.903254 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "962524c6-7992-43d5-a7f3-5fdd04297f24" (UID: "962524c6-7992-43d5-a7f3-5fdd04297f24"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.912607 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/962524c6-7992-43d5-a7f3-5fdd04297f24-kube-api-access-4l4n2" (OuterVolumeSpecName: "kube-api-access-4l4n2") pod "962524c6-7992-43d5-a7f3-5fdd04297f24" (UID: "962524c6-7992-43d5-a7f3-5fdd04297f24"). InnerVolumeSpecName "kube-api-access-4l4n2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.931615 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "962524c6-7992-43d5-a7f3-5fdd04297f24" (UID: "962524c6-7992-43d5-a7f3-5fdd04297f24"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.932728 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "962524c6-7992-43d5-a7f3-5fdd04297f24" (UID: "962524c6-7992-43d5-a7f3-5fdd04297f24"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.933683 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "962524c6-7992-43d5-a7f3-5fdd04297f24" (UID: "962524c6-7992-43d5-a7f3-5fdd04297f24"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.934114 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "962524c6-7992-43d5-a7f3-5fdd04297f24" (UID: "962524c6-7992-43d5-a7f3-5fdd04297f24"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.944311 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "962524c6-7992-43d5-a7f3-5fdd04297f24" (UID: "962524c6-7992-43d5-a7f3-5fdd04297f24"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.947427 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "962524c6-7992-43d5-a7f3-5fdd04297f24" (UID: "962524c6-7992-43d5-a7f3-5fdd04297f24"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.995005 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9755d24d-ee48-44a4-aa63-5b014999e3a9-kubelet-dir\") pod \"9755d24d-ee48-44a4-aa63-5b014999e3a9\" (UID: \"9755d24d-ee48-44a4-aa63-5b014999e3a9\") " Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.995189 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9755d24d-ee48-44a4-aa63-5b014999e3a9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9755d24d-ee48-44a4-aa63-5b014999e3a9" (UID: "9755d24d-ee48-44a4-aa63-5b014999e3a9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.995222 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9755d24d-ee48-44a4-aa63-5b014999e3a9-kube-api-access\") pod \"9755d24d-ee48-44a4-aa63-5b014999e3a9\" (UID: \"9755d24d-ee48-44a4-aa63-5b014999e3a9\") " Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.995266 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9755d24d-ee48-44a4-aa63-5b014999e3a9-var-lock\") pod \"9755d24d-ee48-44a4-aa63-5b014999e3a9\" (UID: \"9755d24d-ee48-44a4-aa63-5b014999e3a9\") " Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.995483 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9755d24d-ee48-44a4-aa63-5b014999e3a9-var-lock" (OuterVolumeSpecName: "var-lock") pod "9755d24d-ee48-44a4-aa63-5b014999e3a9" (UID: "9755d24d-ee48-44a4-aa63-5b014999e3a9"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.995524 4721 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9755d24d-ee48-44a4-aa63-5b014999e3a9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.995641 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.995669 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.995686 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l4n2\" (UniqueName: \"kubernetes.io/projected/962524c6-7992-43d5-a7f3-5fdd04297f24-kube-api-access-4l4n2\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.995700 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.995717 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.995730 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.995748 4721 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/962524c6-7992-43d5-a7f3-5fdd04297f24-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.995759 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.995772 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.995784 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.995795 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.995807 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:05 crc kubenswrapper[4721]: I0202 13:05:05.995818 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/962524c6-7992-43d5-a7f3-5fdd04297f24-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.007864 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9755d24d-ee48-44a4-aa63-5b014999e3a9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9755d24d-ee48-44a4-aa63-5b014999e3a9" (UID: "9755d24d-ee48-44a4-aa63-5b014999e3a9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.097782 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9755d24d-ee48-44a4-aa63-5b014999e3a9-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.097835 4721 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9755d24d-ee48-44a4-aa63-5b014999e3a9-var-lock\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.189000 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.190349 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.191148 4721 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.191636 4721 status_manager.go:851] "Failed to get status for pod" podUID="9755d24d-ee48-44a4-aa63-5b014999e3a9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.192191 4721 status_manager.go:851] "Failed to get status for pod" podUID="962524c6-7992-43d5-a7f3-5fdd04297f24" pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-fqbhq\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.192518 4721 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.300653 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.301062 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.300787 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.301114 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.301144 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.301155 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.301361 4721 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.301376 4721 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.301385 4721 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.415237 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.623956 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.625147 4721 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5" exitCode=0 Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.625233 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.625311 4721 scope.go:117] "RemoveContainer" containerID="15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.627151 4721 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.627919 4721 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.628167 4721 status_manager.go:851] "Failed to get status for pod" podUID="9755d24d-ee48-44a4-aa63-5b014999e3a9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.628302 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.628397 4721 status_manager.go:851] "Failed to get status for pod" podUID="962524c6-7992-43d5-a7f3-5fdd04297f24" pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-fqbhq\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.628693 4721 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.628752 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" event={"ID":"962524c6-7992-43d5-a7f3-5fdd04297f24","Type":"ContainerDied","Data":"e1c6b11699215c240779ba4ffc084b0f044db3750d6c816f2d805a78f36b24e5"} Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.628859 4721 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.629233 4721 status_manager.go:851] "Failed to get status for pod" podUID="9755d24d-ee48-44a4-aa63-5b014999e3a9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.247:6443: 
connect: connection refused" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.629646 4721 status_manager.go:851] "Failed to get status for pod" podUID="962524c6-7992-43d5-a7f3-5fdd04297f24" pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-fqbhq\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.630740 4721 status_manager.go:851] "Failed to get status for pod" podUID="962524c6-7992-43d5-a7f3-5fdd04297f24" pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-fqbhq\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.631017 4721 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.632083 4721 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.633160 4721 status_manager.go:851] "Failed to get status for pod" podUID="9755d24d-ee48-44a4-aa63-5b014999e3a9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.633532 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9755d24d-ee48-44a4-aa63-5b014999e3a9","Type":"ContainerDied","Data":"77bc31793d5cf8c9c2a6f1e7dc74fa1124392a5cef395c56ee4646291d25f2eb"} Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.633605 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77bc31793d5cf8c9c2a6f1e7dc74fa1124392a5cef395c56ee4646291d25f2eb" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.633564 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.635549 4721 status_manager.go:851] "Failed to get status for pod" podUID="962524c6-7992-43d5-a7f3-5fdd04297f24" pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-fqbhq\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.636378 4721 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.636805 4721 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.637111 4721 status_manager.go:851] "Failed to get status for pod" podUID="9755d24d-ee48-44a4-aa63-5b014999e3a9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.637551 4721 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.638107 4721 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.638426 4721 status_manager.go:851] "Failed to get status for pod" podUID="9755d24d-ee48-44a4-aa63-5b014999e3a9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.638685 4721 status_manager.go:851] "Failed to get status for pod" podUID="962524c6-7992-43d5-a7f3-5fdd04297f24" pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-fqbhq\": dial tcp 38.129.56.247:6443: connect: connection refused" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.644180 4721 scope.go:117] "RemoveContainer" containerID="63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.661650 4721 scope.go:117] 
"RemoveContainer" containerID="64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.677280 4721 scope.go:117] "RemoveContainer" containerID="f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.691085 4721 scope.go:117] "RemoveContainer" containerID="5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.708638 4721 scope.go:117] "RemoveContainer" containerID="465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.734578 4721 scope.go:117] "RemoveContainer" containerID="15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e" Feb 02 13:05:06 crc kubenswrapper[4721]: E0202 13:05:06.735281 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\": container with ID starting with 15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e not found: ID does not exist" containerID="15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.735359 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e"} err="failed to get container status \"15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\": rpc error: code = NotFound desc = could not find container \"15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e\": container with ID starting with 15b88ed26cc224bfef04033a527dfc61b8cbb02ee20c8eeed61becaccc11026e not found: ID does not exist" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.735410 4721 scope.go:117] "RemoveContainer" containerID="63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f" Feb 02 13:05:06 crc kubenswrapper[4721]: E0202 13:05:06.736124 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\": container with ID starting with 63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f not found: ID does not exist" containerID="63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.736153 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f"} err="failed to get container status \"63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\": rpc error: code = NotFound desc = could not find container \"63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f\": container with ID starting with 63082a40cefc099dbdfa1d78cbb94c4c5e2f4be2bb97c3d95f8a0b198404811f not found: ID does not exist" Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.736170 4721 scope.go:117] "RemoveContainer" containerID="64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06" Feb 02 13:05:06 crc kubenswrapper[4721]: E0202 13:05:06.736656 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\": 
Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.736747 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06"} err="failed to get container status \"64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\": rpc error: code = NotFound desc = could not find container \"64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06\": container with ID starting with 64df3044205a42c6ecd41f11a393c6bf1c2db8e9f91a451143d99a8ffb442f06 not found: ID does not exist"
Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.736833 4721 scope.go:117] "RemoveContainer" containerID="f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787"
Feb 02 13:05:06 crc kubenswrapper[4721]: E0202 13:05:06.737267 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\": container with ID starting with f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787 not found: ID does not exist" containerID="f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787"
Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.737327 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787"} err="failed to get container status \"f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\": rpc error: code = NotFound desc = could not find container \"f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787\": container with ID starting with f1623f9ea27423fd810955fe8cedbd4d7a6689b71b1639406f25d6173465f787 not found: ID does not exist"
Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.737365 4721 scope.go:117] "RemoveContainer" containerID="5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5"
Feb 02 13:05:06 crc kubenswrapper[4721]: E0202 13:05:06.737791 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\": container with ID starting with 5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5 not found: ID does not exist" containerID="5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5"
Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.737817 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5"} err="failed to get container status \"5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\": rpc error: code = NotFound desc = could not find container \"5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5\": container with ID starting with 5d3d2b7df6931f13d24fc821fc0893006a6501b48be95e1d99225c3862f842d5 not found: ID does not exist"
Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.737840 4721 scope.go:117] "RemoveContainer" containerID="465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c"
Feb 02 13:05:06 crc kubenswrapper[4721]: E0202 13:05:06.738181 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\": container with ID starting with 465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c not found: ID does not exist" containerID="465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c"
Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.738214 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c"} err="failed to get container status \"465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\": rpc error: code = NotFound desc = could not find container \"465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c\": container with ID starting with 465986dbfff5026df7aa5a90471e79a8d3c101e0333d8065c1cffc6efa04015c not found: ID does not exist"
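
Note: each "RemoveContainer" above is answered by a gRPC NotFound from CRI-O and a "DeleteContainer returned error" line, after which the kubelet simply moves on: the containers were already gone, so the cleanup is effectively idempotent. A sketch of that NotFound classification using the real google.golang.org/grpc status API (the removeFn wiring is hypothetical, not kubelet source):

```go
// Sketch only: treat a gRPC NotFound from the runtime as "already removed".
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeFn stands in for a CRI RemoveContainer round trip (hypothetical).
type removeFn func(containerID string) error

// removeIfPresent makes repeated cleanup passes harmless: NotFound means the
// container is already gone, so it is not reported as a failure.
func removeIfPresent(remove removeFn, id string) error {
	switch err := remove(id); status.Code(err) {
	case codes.OK:
		return nil
	case codes.NotFound:
		return nil // already removed; nothing left to do
	default:
		return fmt.Errorf("removing container %s: %w", id, err)
	}
}

func main() {
	alreadyGone := func(id string) error {
		return status.Errorf(codes.NotFound, "could not find container %q", id)
	}
	fmt.Println(removeIfPresent(alreadyGone, "15b88ed26cc2")) // <nil>
}
```
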
Feb 02 13:05:06 crc kubenswrapper[4721]: I0202 13:05:06.738236 4721 scope.go:117] "RemoveContainer" containerID="81f1ed113ff28d261e45b8f089ff55331cfa48c5e28c0300ad3ede4e1aa70b95"
Feb 02 13:05:09 crc kubenswrapper[4721]: E0202 13:05:09.668439 4721 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.247:6443: connect: connection refused"
Feb 02 13:05:09 crc kubenswrapper[4721]: E0202 13:05:09.669228 4721 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.247:6443: connect: connection refused"
Feb 02 13:05:09 crc kubenswrapper[4721]: E0202 13:05:09.669504 4721 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.247:6443: connect: connection refused"
Feb 02 13:05:09 crc kubenswrapper[4721]: E0202 13:05:09.669779 4721 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.247:6443: connect: connection refused"
Feb 02 13:05:09 crc kubenswrapper[4721]: E0202 13:05:09.670032 4721 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.247:6443: connect: connection refused"
Feb 02 13:05:09 crc kubenswrapper[4721]: I0202 13:05:09.670057 4721 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 02 13:05:09 crc kubenswrapper[4721]: E0202 13:05:09.670334 4721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.247:6443: connect: connection refused" interval="200ms"
Feb 02 13:05:09 crc kubenswrapper[4721]: E0202 13:05:09.871336 4721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.247:6443: connect: connection refused" interval="400ms"
Feb 02 13:05:10 crc kubenswrapper[4721]: E0202 13:05:10.272667 4721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.247:6443: connect: connection refused" interval="800ms"
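
Note: the lease entries above show the node-lease controller's retry shape: five back-to-back update attempts fail, it falls back to "ensure lease", and the retry interval then doubles per attempt (200ms, 400ms, 800ms here; 1.6s, 3.2s, 6.4s further down). A sketch of such capped doubling backoff (the 7s cap and the wiring are assumptions for illustration, not taken from kubelet source):

```go
// Sketch only: capped doubling backoff matching the interval progression above.
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff runs op up to attempts times, doubling the wait after each
// failure up to max.
func retryWithBackoff(initial, max time.Duration, attempts int, op func() error) error {
	interval := initial
	var err error
	for i := 1; i <= attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed (%v), will retry; interval=%s\n", i, err, interval)
		time.Sleep(interval)
		if interval *= 2; interval > max {
			interval = max
		}
	}
	return fmt.Errorf("failed %d attempts to update lease: %w", attempts, err)
}

func main() {
	refused := errors.New("dial tcp 38.129.56.247:6443: connect: connection refused")
	err := retryWithBackoff(200*time.Millisecond, 7*time.Second, 5, func() error {
		return refused // simulated lease update while the API server is down
	})
	fmt.Println(err) // a caller would now fall back to "ensure lease"
}
```
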
Feb 02 13:05:10 crc kubenswrapper[4721]: I0202 13:05:10.413135 4721 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.247:6443: connect: connection refused"
Feb 02 13:05:10 crc kubenswrapper[4721]: I0202 13:05:10.413888 4721 status_manager.go:851] "Failed to get status for pod" podUID="9755d24d-ee48-44a4-aa63-5b014999e3a9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.247:6443: connect: connection refused"
Feb 02 13:05:10 crc kubenswrapper[4721]: I0202 13:05:10.414297 4721 status_manager.go:851] "Failed to get status for pod" podUID="962524c6-7992-43d5-a7f3-5fdd04297f24" pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-fqbhq\": dial tcp 38.129.56.247:6443: connect: connection refused"
Feb 02 13:05:10 crc kubenswrapper[4721]: E0202 13:05:10.453553 4721 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.247:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18906fbb6280333d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 13:05:04.158741309 +0000 UTC m=+244.461255698,LastTimestamp:2026-02-02 13:05:04.158741309 +0000 UTC m=+244.461255698,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 13:05:11 crc kubenswrapper[4721]: E0202 13:05:11.073475 4721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.247:6443: connect: connection refused" interval="1.6s"
Feb 02 13:05:12 crc kubenswrapper[4721]: E0202 13:05:12.674685 4721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.247:6443: connect: connection refused" interval="3.2s"
Feb 02 13:05:15 crc kubenswrapper[4721]: I0202 13:05:15.409699 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 13:05:15 crc kubenswrapper[4721]: I0202 13:05:15.410687 4721 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.247:6443: connect: connection refused"
Feb 02 13:05:15 crc kubenswrapper[4721]: I0202 13:05:15.410910 4721 status_manager.go:851] "Failed to get status for pod" podUID="9755d24d-ee48-44a4-aa63-5b014999e3a9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.247:6443: connect: connection refused"
Feb 02 13:05:15 crc kubenswrapper[4721]: I0202 13:05:15.411420 4721 status_manager.go:851] "Failed to get status for pod" podUID="962524c6-7992-43d5-a7f3-5fdd04297f24" pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-fqbhq\": dial tcp 38.129.56.247:6443: connect: connection refused"
Feb 02 13:05:15 crc kubenswrapper[4721]: I0202 13:05:15.425455 4721 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c0939893-cc01-45bf-844d-77d599d4d0a4"
Feb 02 13:05:15 crc kubenswrapper[4721]: I0202 13:05:15.425850 4721 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c0939893-cc01-45bf-844d-77d599d4d0a4"
Feb 02 13:05:15 crc kubenswrapper[4721]: E0202 13:05:15.426309 4721 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.247:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 13:05:15 crc kubenswrapper[4721]: I0202 13:05:15.426738 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
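
Note: "Deleting a mirror pod" above is the kubelet trying to remove the stale API-side copy of the static pod kube-apiserver-crc (UID c0939893-...) so a fresh mirror can be created for the new revision; the delete keeps failing with connection refused until the API server is back, and finally succeeds at 13:05:22 ("Deleted mirror pod because it is outdated"). A sketch of a UID-preconditioned delete with client-go, the standard way to remove exactly the stale object without racing its replacement (the client-go calls are real; the surrounding wiring is illustrative, not mirror_client.go itself):

```go
// Sketch only: delete a mirror pod only while it still has the stale UID.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func deleteStaleMirrorPod(ctx context.Context, cs kubernetes.Interface, ns, name string, staleUID types.UID) error {
	uid := staleUID
	return cs.CoreV1().Pods(ns).Delete(ctx, name, metav1.DeleteOptions{
		// The precondition makes the delete fail if the pod was already
		// replaced by one with a different UID, so a recreated pod is left alone.
		Preconditions: &metav1.Preconditions{UID: &uid},
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = deleteStaleMirrorPod(context.Background(), cs,
		"openshift-kube-apiserver", "kube-apiserver-crc",
		types.UID("c0939893-cc01-45bf-844d-77d599d4d0a4"))
	fmt.Println("delete result:", err)
}
```
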
Feb 02 13:05:15 crc kubenswrapper[4721]: I0202 13:05:15.680058 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1c2e6eb6f67afe25e98ac97691c60139417e30f5b3496fe487baafd1770cd7e2"}
Feb 02 13:05:15 crc kubenswrapper[4721]: I0202 13:05:15.680131 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6e1767853e47e04cbbcbdebe254c65faac594cdfc7d46260a0341dbe5bf5a195"}
Feb 02 13:05:15 crc kubenswrapper[4721]: I0202 13:05:15.680419 4721 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c0939893-cc01-45bf-844d-77d599d4d0a4"
Feb 02 13:05:15 crc kubenswrapper[4721]: I0202 13:05:15.680441 4721 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c0939893-cc01-45bf-844d-77d599d4d0a4"
Feb 02 13:05:15 crc kubenswrapper[4721]: E0202 13:05:15.680829 4721 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.247:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 13:05:15 crc kubenswrapper[4721]: I0202 13:05:15.680885 4721 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.247:6443: connect: connection refused"
Feb 02 13:05:15 crc kubenswrapper[4721]: I0202 13:05:15.681246 4721 status_manager.go:851] "Failed to get status for pod" podUID="9755d24d-ee48-44a4-aa63-5b014999e3a9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.247:6443: connect: connection refused"
Feb 02 13:05:15 crc kubenswrapper[4721]: I0202 13:05:15.681699 4721 status_manager.go:851] "Failed to get status for pod" podUID="962524c6-7992-43d5-a7f3-5fdd04297f24" pod="openshift-authentication/oauth-openshift-558db77b4-fqbhq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-fqbhq\": dial tcp 38.129.56.247:6443: connect: connection refused"
Feb 02 13:05:15 crc kubenswrapper[4721]: E0202 13:05:15.875911 4721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.247:6443: connect: connection refused" interval="6.4s"
Feb 02 13:05:16 crc kubenswrapper[4721]: I0202 13:05:16.689405 4721 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="1c2e6eb6f67afe25e98ac97691c60139417e30f5b3496fe487baafd1770cd7e2" exitCode=0
Feb 02 13:05:16 crc kubenswrapper[4721]: I0202 13:05:16.689495 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"1c2e6eb6f67afe25e98ac97691c60139417e30f5b3496fe487baafd1770cd7e2"}
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"1c2e6eb6f67afe25e98ac97691c60139417e30f5b3496fe487baafd1770cd7e2"} Feb 02 13:05:16 crc kubenswrapper[4721]: I0202 13:05:16.689789 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8feeaa9e603205dae86de9199b6132b1ba82ba117e8a4943cc3f2adc19175757"} Feb 02 13:05:16 crc kubenswrapper[4721]: I0202 13:05:16.689804 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e117ae8029ca07abd811aacfa5b46d99b19f0719fa797e122926af43114dbc40"} Feb 02 13:05:16 crc kubenswrapper[4721]: I0202 13:05:16.689814 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f75a7ea60003f2d17e897e77f706085400f675d9f5d74bb0b62ba9f9f02620da"} Feb 02 13:05:16 crc kubenswrapper[4721]: I0202 13:05:16.689826 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d530490a660223c99ba2db0d44e273ac9ee98b2ce67ba36da894a4c22646431e"} Feb 02 13:05:16 crc kubenswrapper[4721]: I0202 13:05:16.693011 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 02 13:05:16 crc kubenswrapper[4721]: I0202 13:05:16.693100 4721 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f" exitCode=1 Feb 02 13:05:16 crc kubenswrapper[4721]: I0202 13:05:16.693141 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f"} Feb 02 13:05:16 crc kubenswrapper[4721]: I0202 13:05:16.693722 4721 scope.go:117] "RemoveContainer" containerID="35ad9a6eb5018e783ee5445aa61ea8e1d030f3fd0b1d6a3007ed1ed55b1b0f0f" Feb 02 13:05:17 crc kubenswrapper[4721]: I0202 13:05:17.706551 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 02 13:05:17 crc kubenswrapper[4721]: I0202 13:05:17.706637 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f7aabb411739ce80ab47c668025392aa9ac42d7b38f7c431572198e251ab9866"} Feb 02 13:05:17 crc kubenswrapper[4721]: I0202 13:05:17.710600 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d3e0bb60e40afe56a1ecd42e4dbeddf5e56a2c15f8bfaec75e826492cd09de47"} Feb 02 13:05:17 crc kubenswrapper[4721]: I0202 13:05:17.710839 4721 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c0939893-cc01-45bf-844d-77d599d4d0a4" Feb 02 13:05:17 crc 
Feb 02 13:05:17 crc kubenswrapper[4721]: I0202 13:05:17.711088 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 13:05:18 crc kubenswrapper[4721]: I0202 13:05:18.398398 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 13:05:20 crc kubenswrapper[4721]: I0202 13:05:20.427534 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 13:05:20 crc kubenswrapper[4721]: I0202 13:05:20.427916 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 13:05:20 crc kubenswrapper[4721]: I0202 13:05:20.433494 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 13:05:22 crc kubenswrapper[4721]: I0202 13:05:22.723219 4721 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 13:05:23 crc kubenswrapper[4721]: I0202 13:05:23.739714 4721 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c0939893-cc01-45bf-844d-77d599d4d0a4"
Feb 02 13:05:23 crc kubenswrapper[4721]: I0202 13:05:23.740094 4721 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c0939893-cc01-45bf-844d-77d599d4d0a4"
Feb 02 13:05:23 crc kubenswrapper[4721]: I0202 13:05:23.744171 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 13:05:23 crc kubenswrapper[4721]: I0202 13:05:23.746446 4721 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="4546d4e4-ab43-4ddd-8a17-31d5868b4a79"
Feb 02 13:05:24 crc kubenswrapper[4721]: I0202 13:05:24.015632 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 13:05:24 crc kubenswrapper[4721]: I0202 13:05:24.019745 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 13:05:24 crc kubenswrapper[4721]: I0202 13:05:24.744473 4721 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c0939893-cc01-45bf-844d-77d599d4d0a4"
Feb 02 13:05:24 crc kubenswrapper[4721]: I0202 13:05:24.744508 4721 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c0939893-cc01-45bf-844d-77d599d4d0a4"
Feb 02 13:05:28 crc kubenswrapper[4721]: I0202 13:05:28.402642 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 13:05:28 crc kubenswrapper[4721]: I0202 13:05:28.957987 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 02 13:05:29 crc kubenswrapper[4721]: I0202 13:05:29.955639 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
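
Note: from here to the end of the capture the log is dominated by "Caches populated" lines: one per client-go reflector, each emitted once the initial List+Watch for a mounted Secret or ConfigMap has filled its local cache, now that the API server answers again. The same mechanism in miniature, using the real client-go informer API (the kubeconfig path and the choice of ConfigMaps are illustrative):

```go
// Sketch only: a client-go informer whose reflector lists+watches ConfigMaps;
// WaitForCacheSync returns once the initial list has populated the store.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	factory := informers.NewSharedInformerFactory(cs, 10*time.Minute)
	cm := factory.Core().V1().ConfigMaps().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// The moment this unblocks corresponds to a "Caches populated" line.
	if !cache.WaitForCacheSync(stop, cm.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("ConfigMap cache populated")
}
```
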
object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 02 13:05:30 crc kubenswrapper[4721]: I0202 13:05:30.044373 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 02 13:05:30 crc kubenswrapper[4721]: I0202 13:05:30.129059 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 02 13:05:30 crc kubenswrapper[4721]: I0202 13:05:30.427716 4721 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="4546d4e4-ab43-4ddd-8a17-31d5868b4a79" Feb 02 13:05:30 crc kubenswrapper[4721]: I0202 13:05:30.631046 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 02 13:05:30 crc kubenswrapper[4721]: I0202 13:05:30.674052 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 02 13:05:30 crc kubenswrapper[4721]: I0202 13:05:30.842355 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 02 13:05:30 crc kubenswrapper[4721]: I0202 13:05:30.850941 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 02 13:05:30 crc kubenswrapper[4721]: I0202 13:05:30.867355 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 02 13:05:31 crc kubenswrapper[4721]: I0202 13:05:31.269358 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 02 13:05:31 crc kubenswrapper[4721]: I0202 13:05:31.571574 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 02 13:05:31 crc kubenswrapper[4721]: I0202 13:05:31.637228 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 02 13:05:31 crc kubenswrapper[4721]: I0202 13:05:31.833258 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 02 13:05:32 crc kubenswrapper[4721]: I0202 13:05:32.075285 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 02 13:05:32 crc kubenswrapper[4721]: I0202 13:05:32.342396 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 02 13:05:32 crc kubenswrapper[4721]: I0202 13:05:32.362624 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 02 13:05:32 crc kubenswrapper[4721]: I0202 13:05:32.660403 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 02 13:05:34 crc kubenswrapper[4721]: I0202 13:05:34.095339 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 02 13:05:34 crc kubenswrapper[4721]: I0202 13:05:34.196650 4721 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-tls" Feb 02 13:05:34 crc kubenswrapper[4721]: I0202 13:05:34.280252 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 02 13:05:34 crc kubenswrapper[4721]: I0202 13:05:34.431746 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 02 13:05:34 crc kubenswrapper[4721]: I0202 13:05:34.482832 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 02 13:05:34 crc kubenswrapper[4721]: I0202 13:05:34.562277 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 02 13:05:34 crc kubenswrapper[4721]: I0202 13:05:34.686092 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 02 13:05:34 crc kubenswrapper[4721]: I0202 13:05:34.773138 4721 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 02 13:05:34 crc kubenswrapper[4721]: I0202 13:05:34.773235 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 02 13:05:34 crc kubenswrapper[4721]: I0202 13:05:34.942017 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 02 13:05:35 crc kubenswrapper[4721]: I0202 13:05:35.051049 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 02 13:05:35 crc kubenswrapper[4721]: I0202 13:05:35.329202 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 02 13:05:35 crc kubenswrapper[4721]: I0202 13:05:35.341102 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 02 13:05:35 crc kubenswrapper[4721]: I0202 13:05:35.447434 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 02 13:05:35 crc kubenswrapper[4721]: I0202 13:05:35.572232 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 02 13:05:35 crc kubenswrapper[4721]: I0202 13:05:35.812807 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 02 13:05:35 crc kubenswrapper[4721]: I0202 13:05:35.867820 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 02 13:05:36 crc kubenswrapper[4721]: I0202 13:05:36.000053 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 02 13:05:36 crc kubenswrapper[4721]: I0202 13:05:36.127241 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 02 13:05:36 crc kubenswrapper[4721]: I0202 13:05:36.220584 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 02 13:05:36 crc kubenswrapper[4721]: I0202 13:05:36.352485 4721 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 02 13:05:36 crc kubenswrapper[4721]: I0202 13:05:36.582957 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 02 13:05:36 crc kubenswrapper[4721]: I0202 13:05:36.662372 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 02 13:05:36 crc kubenswrapper[4721]: I0202 13:05:36.667196 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 02 13:05:36 crc kubenswrapper[4721]: I0202 13:05:36.682974 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 02 13:05:36 crc kubenswrapper[4721]: I0202 13:05:36.822578 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 02 13:05:36 crc kubenswrapper[4721]: I0202 13:05:36.888387 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 02 13:05:36 crc kubenswrapper[4721]: I0202 13:05:36.948359 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 02 13:05:37 crc kubenswrapper[4721]: I0202 13:05:37.072697 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 02 13:05:37 crc kubenswrapper[4721]: I0202 13:05:37.111932 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 02 13:05:37 crc kubenswrapper[4721]: I0202 13:05:37.144529 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 02 13:05:37 crc kubenswrapper[4721]: I0202 13:05:37.155979 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 02 13:05:37 crc kubenswrapper[4721]: I0202 13:05:37.305906 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 02 13:05:37 crc kubenswrapper[4721]: I0202 13:05:37.486299 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 02 13:05:37 crc kubenswrapper[4721]: I0202 13:05:37.558523 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 02 13:05:37 crc kubenswrapper[4721]: I0202 13:05:37.619619 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 02 13:05:37 crc kubenswrapper[4721]: I0202 13:05:37.654752 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 02 13:05:37 crc kubenswrapper[4721]: I0202 13:05:37.793700 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 02 13:05:37 crc kubenswrapper[4721]: I0202 13:05:37.885467 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 02 13:05:37 crc kubenswrapper[4721]: 
I0202 13:05:37.895174 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 02 13:05:37 crc kubenswrapper[4721]: I0202 13:05:37.966033 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 02 13:05:38 crc kubenswrapper[4721]: I0202 13:05:38.042654 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 02 13:05:38 crc kubenswrapper[4721]: I0202 13:05:38.173699 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 13:05:38 crc kubenswrapper[4721]: I0202 13:05:38.197474 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 02 13:05:38 crc kubenswrapper[4721]: I0202 13:05:38.219971 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 02 13:05:38 crc kubenswrapper[4721]: I0202 13:05:38.252997 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 02 13:05:38 crc kubenswrapper[4721]: I0202 13:05:38.341547 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 02 13:05:38 crc kubenswrapper[4721]: I0202 13:05:38.436173 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 02 13:05:38 crc kubenswrapper[4721]: I0202 13:05:38.467131 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 02 13:05:38 crc kubenswrapper[4721]: I0202 13:05:38.468564 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 02 13:05:38 crc kubenswrapper[4721]: I0202 13:05:38.487166 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 02 13:05:38 crc kubenswrapper[4721]: I0202 13:05:38.654360 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 02 13:05:38 crc kubenswrapper[4721]: I0202 13:05:38.720369 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 02 13:05:38 crc kubenswrapper[4721]: I0202 13:05:38.734191 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 02 13:05:38 crc kubenswrapper[4721]: I0202 13:05:38.807709 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 02 13:05:38 crc kubenswrapper[4721]: I0202 13:05:38.893821 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 02 13:05:38 crc kubenswrapper[4721]: I0202 13:05:38.986912 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 02 13:05:39 crc kubenswrapper[4721]: I0202 13:05:39.162686 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 02 13:05:39 crc kubenswrapper[4721]: I0202 13:05:39.213806 4721 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 13:05:39 crc kubenswrapper[4721]: I0202 13:05:39.229791 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 02 13:05:39 crc kubenswrapper[4721]: I0202 13:05:39.375259 4721 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 02 13:05:39 crc kubenswrapper[4721]: I0202 13:05:39.424015 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 02 13:05:39 crc kubenswrapper[4721]: I0202 13:05:39.493501 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 02 13:05:39 crc kubenswrapper[4721]: I0202 13:05:39.534408 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 02 13:05:39 crc kubenswrapper[4721]: I0202 13:05:39.607124 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 02 13:05:39 crc kubenswrapper[4721]: I0202 13:05:39.664693 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 02 13:05:39 crc kubenswrapper[4721]: I0202 13:05:39.728751 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 02 13:05:39 crc kubenswrapper[4721]: I0202 13:05:39.760330 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 13:05:39 crc kubenswrapper[4721]: I0202 13:05:39.769766 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 02 13:05:39 crc kubenswrapper[4721]: I0202 13:05:39.875013 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 02 13:05:39 crc kubenswrapper[4721]: I0202 13:05:39.945609 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 02 13:05:39 crc kubenswrapper[4721]: I0202 13:05:39.994454 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 02 13:05:40 crc kubenswrapper[4721]: I0202 13:05:40.031938 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 02 13:05:40 crc kubenswrapper[4721]: I0202 13:05:40.658514 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 02 13:05:40 crc kubenswrapper[4721]: I0202 13:05:40.729049 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 02 13:05:40 crc kubenswrapper[4721]: I0202 13:05:40.789587 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 02 13:05:40 crc kubenswrapper[4721]: I0202 13:05:40.949238 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 02 13:05:40 crc kubenswrapper[4721]: I0202 13:05:40.987512 4721 reflector.go:368] Caches populated for *v1.Service from 
k8s.io/client-go/informers/factory.go:160 Feb 02 13:05:40 crc kubenswrapper[4721]: I0202 13:05:40.987849 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 02 13:05:41 crc kubenswrapper[4721]: I0202 13:05:41.004264 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 02 13:05:41 crc kubenswrapper[4721]: I0202 13:05:41.122391 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 02 13:05:41 crc kubenswrapper[4721]: I0202 13:05:41.159245 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 02 13:05:41 crc kubenswrapper[4721]: I0202 13:05:41.301949 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 02 13:05:41 crc kubenswrapper[4721]: I0202 13:05:41.359150 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 02 13:05:41 crc kubenswrapper[4721]: I0202 13:05:41.371343 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 02 13:05:41 crc kubenswrapper[4721]: I0202 13:05:41.478570 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 02 13:05:41 crc kubenswrapper[4721]: I0202 13:05:41.546268 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 13:05:41 crc kubenswrapper[4721]: I0202 13:05:41.557825 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 02 13:05:41 crc kubenswrapper[4721]: I0202 13:05:41.579250 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 02 13:05:41 crc kubenswrapper[4721]: I0202 13:05:41.590927 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 02 13:05:41 crc kubenswrapper[4721]: I0202 13:05:41.606979 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 02 13:05:41 crc kubenswrapper[4721]: I0202 13:05:41.666025 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 02 13:05:41 crc kubenswrapper[4721]: I0202 13:05:41.689435 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 02 13:05:41 crc kubenswrapper[4721]: I0202 13:05:41.701058 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 02 13:05:41 crc kubenswrapper[4721]: I0202 13:05:41.729197 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 02 13:05:41 crc kubenswrapper[4721]: I0202 13:05:41.802556 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 02 13:05:41 crc kubenswrapper[4721]: I0202 13:05:41.811929 4721 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 02 13:05:41 crc kubenswrapper[4721]: I0202 13:05:41.889395 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 02 13:05:41 crc kubenswrapper[4721]: I0202 13:05:41.890038 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 02 13:05:41 crc kubenswrapper[4721]: I0202 13:05:41.942577 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 02 13:05:42 crc kubenswrapper[4721]: I0202 13:05:42.149239 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 02 13:05:42 crc kubenswrapper[4721]: I0202 13:05:42.212913 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 02 13:05:42 crc kubenswrapper[4721]: I0202 13:05:42.235922 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 02 13:05:42 crc kubenswrapper[4721]: I0202 13:05:42.343316 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 02 13:05:42 crc kubenswrapper[4721]: I0202 13:05:42.614923 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 02 13:05:42 crc kubenswrapper[4721]: I0202 13:05:42.697429 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 02 13:05:42 crc kubenswrapper[4721]: I0202 13:05:42.705740 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 02 13:05:42 crc kubenswrapper[4721]: I0202 13:05:42.754414 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 02 13:05:42 crc kubenswrapper[4721]: I0202 13:05:42.788226 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 02 13:05:42 crc kubenswrapper[4721]: I0202 13:05:42.939766 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 02 13:05:42 crc kubenswrapper[4721]: I0202 13:05:42.956939 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 02 13:05:43 crc kubenswrapper[4721]: I0202 13:05:43.135045 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 02 13:05:43 crc kubenswrapper[4721]: I0202 13:05:43.195061 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 13:05:43 crc kubenswrapper[4721]: I0202 13:05:43.253914 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 02 13:05:43 crc kubenswrapper[4721]: I0202 13:05:43.425237 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 13:05:43 crc kubenswrapper[4721]: I0202 13:05:43.474388 4721 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 02 13:05:43 crc kubenswrapper[4721]: I0202 13:05:43.500430 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 02 13:05:43 crc kubenswrapper[4721]: I0202 13:05:43.500630 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 13:05:43 crc kubenswrapper[4721]: I0202 13:05:43.662470 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 02 13:05:43 crc kubenswrapper[4721]: I0202 13:05:43.677022 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 02 13:05:43 crc kubenswrapper[4721]: I0202 13:05:43.725291 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 13:05:43 crc kubenswrapper[4721]: I0202 13:05:43.736858 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 02 13:05:43 crc kubenswrapper[4721]: I0202 13:05:43.806345 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 13:05:43 crc kubenswrapper[4721]: I0202 13:05:43.811239 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 02 13:05:43 crc kubenswrapper[4721]: I0202 13:05:43.830841 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 02 13:05:43 crc kubenswrapper[4721]: I0202 13:05:43.851732 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 02 13:05:43 crc kubenswrapper[4721]: I0202 13:05:43.875922 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 02 13:05:44 crc kubenswrapper[4721]: I0202 13:05:44.072201 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 02 13:05:44 crc kubenswrapper[4721]: I0202 13:05:44.074297 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 02 13:05:44 crc kubenswrapper[4721]: I0202 13:05:44.264173 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 02 13:05:44 crc kubenswrapper[4721]: I0202 13:05:44.351986 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 02 13:05:44 crc kubenswrapper[4721]: I0202 13:05:44.356766 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 02 13:05:44 crc kubenswrapper[4721]: I0202 13:05:44.397398 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 13:05:44 crc kubenswrapper[4721]: I0202 13:05:44.450290 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 02 13:05:44 
crc kubenswrapper[4721]: I0202 13:05:44.451109 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 02 13:05:44 crc kubenswrapper[4721]: I0202 13:05:44.558016 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 02 13:05:44 crc kubenswrapper[4721]: I0202 13:05:44.653183 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 02 13:05:44 crc kubenswrapper[4721]: I0202 13:05:44.662619 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 02 13:05:44 crc kubenswrapper[4721]: I0202 13:05:44.679270 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 02 13:05:44 crc kubenswrapper[4721]: I0202 13:05:44.689859 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 02 13:05:44 crc kubenswrapper[4721]: I0202 13:05:44.721740 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 02 13:05:44 crc kubenswrapper[4721]: I0202 13:05:44.738592 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 13:05:44 crc kubenswrapper[4721]: I0202 13:05:44.774330 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 13:05:44 crc kubenswrapper[4721]: I0202 13:05:44.824269 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 02 13:05:44 crc kubenswrapper[4721]: I0202 13:05:44.934229 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 02 13:05:44 crc kubenswrapper[4721]: I0202 13:05:44.951366 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 13:05:45 crc kubenswrapper[4721]: I0202 13:05:45.037848 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 13:05:45 crc kubenswrapper[4721]: I0202 13:05:45.076754 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 02 13:05:45 crc kubenswrapper[4721]: I0202 13:05:45.126727 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 02 13:05:45 crc kubenswrapper[4721]: I0202 13:05:45.144005 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 02 13:05:45 crc kubenswrapper[4721]: I0202 13:05:45.167036 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 02 13:05:45 crc kubenswrapper[4721]: I0202 13:05:45.257113 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 02 13:05:45 crc kubenswrapper[4721]: I0202 13:05:45.286492 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 02 13:05:45 crc kubenswrapper[4721]: I0202 13:05:45.370874 4721 
reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 02 13:05:45 crc kubenswrapper[4721]: I0202 13:05:45.390634 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 02 13:05:45 crc kubenswrapper[4721]: I0202 13:05:45.540596 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 02 13:05:45 crc kubenswrapper[4721]: I0202 13:05:45.590334 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 02 13:05:45 crc kubenswrapper[4721]: I0202 13:05:45.652987 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 02 13:05:45 crc kubenswrapper[4721]: I0202 13:05:45.747744 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 02 13:05:45 crc kubenswrapper[4721]: I0202 13:05:45.748880 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 02 13:05:45 crc kubenswrapper[4721]: I0202 13:05:45.806772 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 02 13:05:45 crc kubenswrapper[4721]: I0202 13:05:45.848714 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 02 13:05:45 crc kubenswrapper[4721]: I0202 13:05:45.878621 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 02 13:05:45 crc kubenswrapper[4721]: I0202 13:05:45.931383 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 02 13:05:46 crc kubenswrapper[4721]: I0202 13:05:46.265801 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 02 13:05:46 crc kubenswrapper[4721]: I0202 13:05:46.339761 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 02 13:05:46 crc kubenswrapper[4721]: I0202 13:05:46.370433 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 02 13:05:46 crc kubenswrapper[4721]: I0202 13:05:46.412813 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 02 13:05:46 crc kubenswrapper[4721]: I0202 13:05:46.454433 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 02 13:05:46 crc kubenswrapper[4721]: I0202 13:05:46.574830 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 02 13:05:46 crc kubenswrapper[4721]: I0202 13:05:46.585088 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 02 13:05:46 crc kubenswrapper[4721]: I0202 13:05:46.620778 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 02 13:05:46 crc kubenswrapper[4721]: I0202 13:05:46.694299 4721 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 02 13:05:46 crc kubenswrapper[4721]: I0202 13:05:46.866343 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 02 13:05:46 crc kubenswrapper[4721]: I0202 13:05:46.943590 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 02 13:05:46 crc kubenswrapper[4721]: I0202 13:05:46.992039 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 02 13:05:46 crc kubenswrapper[4721]: I0202 13:05:46.992859 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 02 13:05:47 crc kubenswrapper[4721]: I0202 13:05:47.004992 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 02 13:05:47 crc kubenswrapper[4721]: I0202 13:05:47.138590 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 02 13:05:47 crc kubenswrapper[4721]: I0202 13:05:47.193119 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 02 13:05:47 crc kubenswrapper[4721]: I0202 13:05:47.196432 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 02 13:05:47 crc kubenswrapper[4721]: I0202 13:05:47.260267 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 13:05:47 crc kubenswrapper[4721]: I0202 13:05:47.364852 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 02 13:05:47 crc kubenswrapper[4721]: I0202 13:05:47.487844 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 02 13:05:47 crc kubenswrapper[4721]: I0202 13:05:47.498944 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 02 13:05:47 crc kubenswrapper[4721]: I0202 13:05:47.810806 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 02 13:05:47 crc kubenswrapper[4721]: I0202 13:05:47.866868 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 02 13:05:47 crc kubenswrapper[4721]: I0202 13:05:47.875336 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 02 13:05:47 crc kubenswrapper[4721]: I0202 13:05:47.896107 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 02 13:05:47 crc kubenswrapper[4721]: I0202 13:05:47.947205 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 02 13:05:48 crc kubenswrapper[4721]: I0202 13:05:48.034081 4721 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 02 13:05:48 crc kubenswrapper[4721]: I0202 13:05:48.179366 4721 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 02 13:05:48 crc kubenswrapper[4721]: I0202 13:05:48.272173 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 02 13:05:48 crc kubenswrapper[4721]: I0202 13:05:48.272182 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 02 13:05:48 crc kubenswrapper[4721]: I0202 13:05:48.299494 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 02 13:05:48 crc kubenswrapper[4721]: I0202 13:05:48.400172 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 02 13:05:48 crc kubenswrapper[4721]: I0202 13:05:48.464422 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 02 13:05:48 crc kubenswrapper[4721]: I0202 13:05:48.546741 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 02 13:05:48 crc kubenswrapper[4721]: I0202 13:05:48.569472 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 02 13:05:48 crc kubenswrapper[4721]: I0202 13:05:48.595118 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 02 13:05:48 crc kubenswrapper[4721]: I0202 13:05:48.608547 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 02 13:05:48 crc kubenswrapper[4721]: I0202 13:05:48.684516 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 02 13:05:48 crc kubenswrapper[4721]: I0202 13:05:48.793199 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 02 13:05:48 crc kubenswrapper[4721]: I0202 13:05:48.849244 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 02 13:05:48 crc kubenswrapper[4721]: I0202 13:05:48.906636 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 02 13:05:48 crc kubenswrapper[4721]: I0202 13:05:48.995475 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.365319 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.370720 4721 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.373162 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=46.373143571 podStartE2EDuration="46.373143571s" podCreationTimestamp="2026-02-02 13:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:22.49237574 +0000 UTC 
m=+262.794890129" watchObservedRunningTime="2026-02-02 13:05:49.373143571 +0000 UTC m=+289.675657990" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.376285 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fqbhq","openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.376406 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-76fc545986-xl9jq","openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 13:05:49 crc kubenswrapper[4721]: E0202 13:05:49.376680 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="962524c6-7992-43d5-a7f3-5fdd04297f24" containerName="oauth-openshift" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.376699 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="962524c6-7992-43d5-a7f3-5fdd04297f24" containerName="oauth-openshift" Feb 02 13:05:49 crc kubenswrapper[4721]: E0202 13:05:49.376716 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9755d24d-ee48-44a4-aa63-5b014999e3a9" containerName="installer" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.376725 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="9755d24d-ee48-44a4-aa63-5b014999e3a9" containerName="installer" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.376833 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="9755d24d-ee48-44a4-aa63-5b014999e3a9" containerName="installer" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.376847 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="962524c6-7992-43d5-a7f3-5fdd04297f24" containerName="oauth-openshift" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.377164 4721 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c0939893-cc01-45bf-844d-77d599d4d0a4" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.377218 4721 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c0939893-cc01-45bf-844d-77d599d4d0a4" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.377449 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.379883 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.380906 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.380962 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.381153 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.381440 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.381537 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.381714 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.381988 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.382132 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.382171 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.382412 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.382691 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.383384 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.385404 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.388818 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.392680 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.396434 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.430695 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=27.430603448 
podStartE2EDuration="27.430603448s" podCreationTimestamp="2026-02-02 13:05:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:49.419568303 +0000 UTC m=+289.722082692" watchObservedRunningTime="2026-02-02 13:05:49.430603448 +0000 UTC m=+289.733117837" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.481099 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.528828 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.528926 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-system-router-certs\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.528967 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.529412 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a219d6f3-ca00-4d64-9283-25b7502567c1-audit-policies\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.529494 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.529526 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-user-template-login\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.529547 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.529589 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-user-template-error\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.529666 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a219d6f3-ca00-4d64-9283-25b7502567c1-audit-dir\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.529722 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.529749 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-system-service-ca\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.529768 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47gjz\" (UniqueName: \"kubernetes.io/projected/a219d6f3-ca00-4d64-9283-25b7502567c1-kube-api-access-47gjz\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.529862 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-system-session\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.529882 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.630891 4721 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.631019 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a219d6f3-ca00-4d64-9283-25b7502567c1-audit-policies\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.631050 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.632021 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a219d6f3-ca00-4d64-9283-25b7502567c1-audit-policies\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.632170 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-user-template-login\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.632512 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.632203 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.632631 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-user-template-error\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.632680 4721 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a219d6f3-ca00-4d64-9283-25b7502567c1-audit-dir\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.632712 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.632740 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-system-service-ca\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.632765 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47gjz\" (UniqueName: \"kubernetes.io/projected/a219d6f3-ca00-4d64-9283-25b7502567c1-kube-api-access-47gjz\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.632813 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.632834 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-system-session\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.632866 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.632892 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-system-router-certs\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.633538 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.633606 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a219d6f3-ca00-4d64-9283-25b7502567c1-audit-dir\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.634026 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-system-service-ca\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.636436 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.638169 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-system-router-certs\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.638432 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.638473 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-user-template-error\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.638594 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.638742 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-user-template-login\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 
13:05:49.643274 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.643833 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.644014 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a219d6f3-ca00-4d64-9283-25b7502567c1-v4-0-config-system-session\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.651767 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47gjz\" (UniqueName: \"kubernetes.io/projected/a219d6f3-ca00-4d64-9283-25b7502567c1-kube-api-access-47gjz\") pod \"oauth-openshift-76fc545986-xl9jq\" (UID: \"a219d6f3-ca00-4d64-9283-25b7502567c1\") " pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.697851 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.758628 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.761017 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 02 13:05:49 crc kubenswrapper[4721]: I0202 13:05:49.912664 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 02 13:05:50 crc kubenswrapper[4721]: I0202 13:05:50.099715 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-76fc545986-xl9jq"] Feb 02 13:05:50 crc kubenswrapper[4721]: I0202 13:05:50.177918 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 02 13:05:50 crc kubenswrapper[4721]: I0202 13:05:50.413433 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 02 13:05:50 crc kubenswrapper[4721]: I0202 13:05:50.421180 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="962524c6-7992-43d5-a7f3-5fdd04297f24" path="/var/lib/kubelet/pods/962524c6-7992-43d5-a7f3-5fdd04297f24/volumes" Feb 02 13:05:50 crc kubenswrapper[4721]: I0202 13:05:50.565314 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 02 13:05:50 crc kubenswrapper[4721]: I0202 13:05:50.656301 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 02 13:05:50 crc kubenswrapper[4721]: I0202 13:05:50.883977 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" event={"ID":"a219d6f3-ca00-4d64-9283-25b7502567c1","Type":"ContainerStarted","Data":"35ee26dd46aeb9522e621c0a21c0e31ab749f5b9c5838ddb9d93f5a7579ed1d2"} Feb 02 13:05:50 crc kubenswrapper[4721]: I0202 13:05:50.884024 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" event={"ID":"a219d6f3-ca00-4d64-9283-25b7502567c1","Type":"ContainerStarted","Data":"60f69fc4db1373075ad2c602abd354831ef2372b4e324c7c6d804c6454e0bcc6"} Feb 02 13:05:50 crc kubenswrapper[4721]: I0202 13:05:50.885008 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:50 crc kubenswrapper[4721]: I0202 13:05:50.890664 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" Feb 02 13:05:50 crc kubenswrapper[4721]: I0202 13:05:50.905161 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-76fc545986-xl9jq" podStartSLOduration=70.9051428 podStartE2EDuration="1m10.9051428s" podCreationTimestamp="2026-02-02 13:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:05:50.90226177 +0000 UTC m=+291.204776159" watchObservedRunningTime="2026-02-02 13:05:50.9051428 +0000 UTC m=+291.207657199" Feb 02 13:05:51 crc kubenswrapper[4721]: I0202 13:05:51.351475 4721 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 02 13:05:52 crc kubenswrapper[4721]: I0202 13:05:52.126134 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 02 13:05:56 crc kubenswrapper[4721]: I0202 13:05:56.483883 4721 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 13:05:56 crc kubenswrapper[4721]: I0202 13:05:56.484475 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://ed7c7e2a904888901a446739bb41ce209fe73c11466f51414b1b4e5af67c4884" gracePeriod=5 Feb 02 13:06:00 crc kubenswrapper[4721]: I0202 13:06:00.217305 4721 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 02 13:06:01 crc kubenswrapper[4721]: I0202 13:06:01.949370 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 02 13:06:01 crc kubenswrapper[4721]: I0202 13:06:01.949715 4721 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="ed7c7e2a904888901a446739bb41ce209fe73c11466f51414b1b4e5af67c4884" exitCode=137 Feb 02 13:06:02 crc kubenswrapper[4721]: I0202 13:06:02.101182 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 02 13:06:02 crc kubenswrapper[4721]: I0202 13:06:02.101280 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:06:02 crc kubenswrapper[4721]: I0202 13:06:02.292721 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 13:06:02 crc kubenswrapper[4721]: I0202 13:06:02.292853 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 13:06:02 crc kubenswrapper[4721]: I0202 13:06:02.292883 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 13:06:02 crc kubenswrapper[4721]: I0202 13:06:02.292952 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 13:06:02 crc kubenswrapper[4721]: I0202 13:06:02.292979 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 13:06:02 crc kubenswrapper[4721]: I0202 13:06:02.292976 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:06:02 crc kubenswrapper[4721]: I0202 13:06:02.293031 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:06:02 crc kubenswrapper[4721]: I0202 13:06:02.293051 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:06:02 crc kubenswrapper[4721]: I0202 13:06:02.293179 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:06:02 crc kubenswrapper[4721]: I0202 13:06:02.293233 4721 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:02 crc kubenswrapper[4721]: I0202 13:06:02.293248 4721 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:02 crc kubenswrapper[4721]: I0202 13:06:02.293261 4721 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:02 crc kubenswrapper[4721]: I0202 13:06:02.316345 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:06:02 crc kubenswrapper[4721]: I0202 13:06:02.394087 4721 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:02 crc kubenswrapper[4721]: I0202 13:06:02.394149 4721 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:02 crc kubenswrapper[4721]: I0202 13:06:02.416840 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 02 13:06:02 crc kubenswrapper[4721]: I0202 13:06:02.417118 4721 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 02 13:06:02 crc kubenswrapper[4721]: I0202 13:06:02.430113 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 13:06:02 crc kubenswrapper[4721]: I0202 13:06:02.430169 4721 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="92c983c1-4477-4a0d-ad15-0fb76214c795" Feb 02 13:06:02 crc kubenswrapper[4721]: I0202 13:06:02.432017 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 13:06:02 crc kubenswrapper[4721]: I0202 13:06:02.432089 4721 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="92c983c1-4477-4a0d-ad15-0fb76214c795" Feb 02 13:06:02 crc kubenswrapper[4721]: I0202 13:06:02.956019 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 02 13:06:02 crc kubenswrapper[4721]: I0202 13:06:02.956147 4721 scope.go:117] "RemoveContainer" containerID="ed7c7e2a904888901a446739bb41ce209fe73c11466f51414b1b4e5af67c4884" Feb 02 13:06:02 crc kubenswrapper[4721]: 
I0202 13:06:02.956308 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 13:06:07 crc kubenswrapper[4721]: I0202 13:06:07.997975 4721 generic.go:334] "Generic (PLEG): container finished" podID="2c9074bc-889d-4ce7-a250-6fc5984703e0" containerID="6e95f003df211d09b9562e86431541c3b7c3e84c41d01ea470d07b5cb914180b" exitCode=0 Feb 02 13:06:07 crc kubenswrapper[4721]: I0202 13:06:07.998098 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zcf44" event={"ID":"2c9074bc-889d-4ce7-a250-6fc5984703e0","Type":"ContainerDied","Data":"6e95f003df211d09b9562e86431541c3b7c3e84c41d01ea470d07b5cb914180b"} Feb 02 13:06:08 crc kubenswrapper[4721]: I0202 13:06:07.999640 4721 scope.go:117] "RemoveContainer" containerID="6e95f003df211d09b9562e86431541c3b7c3e84c41d01ea470d07b5cb914180b" Feb 02 13:06:09 crc kubenswrapper[4721]: I0202 13:06:09.005948 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zcf44" event={"ID":"2c9074bc-889d-4ce7-a250-6fc5984703e0","Type":"ContainerStarted","Data":"598f0ede0e5be4e9da5dde9217b5303de54f203ff19e55b638f8757dabfd9366"} Feb 02 13:06:09 crc kubenswrapper[4721]: I0202 13:06:09.006338 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zcf44" Feb 02 13:06:09 crc kubenswrapper[4721]: I0202 13:06:09.007507 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zcf44" Feb 02 13:06:12 crc kubenswrapper[4721]: I0202 13:06:12.843005 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ffkjd"] Feb 02 13:06:12 crc kubenswrapper[4721]: I0202 13:06:12.843968 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-ffkjd" podUID="3c0670a6-888e-40e3-bf5d-82779e70dd1c" containerName="controller-manager" containerID="cri-o://d435dac504fe3034e5527129f376c6ed65b5ea3e1fe83d1eb8463d6282795a18" gracePeriod=30 Feb 02 13:06:12 crc kubenswrapper[4721]: I0202 13:06:12.934226 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f"] Feb 02 13:06:12 crc kubenswrapper[4721]: I0202 13:06:12.934757 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f" podUID="8f1e834f-23b5-42a5-9d13-b9e5720a597c" containerName="route-controller-manager" containerID="cri-o://e50c92de6389f2984e09f1db3217014b56b2e9a4e2a44d5d7b627a3954d39329" gracePeriod=30 Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.030162 4721 generic.go:334] "Generic (PLEG): container finished" podID="3c0670a6-888e-40e3-bf5d-82779e70dd1c" containerID="d435dac504fe3034e5527129f376c6ed65b5ea3e1fe83d1eb8463d6282795a18" exitCode=0 Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.030217 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ffkjd" event={"ID":"3c0670a6-888e-40e3-bf5d-82779e70dd1c","Type":"ContainerDied","Data":"d435dac504fe3034e5527129f376c6ed65b5ea3e1fe83d1eb8463d6282795a18"} Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.337670 4721 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ffkjd" Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.344258 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f" Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.444193 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c0670a6-888e-40e3-bf5d-82779e70dd1c-serving-cert\") pod \"3c0670a6-888e-40e3-bf5d-82779e70dd1c\" (UID: \"3c0670a6-888e-40e3-bf5d-82779e70dd1c\") " Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.444273 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c0670a6-888e-40e3-bf5d-82779e70dd1c-config\") pod \"3c0670a6-888e-40e3-bf5d-82779e70dd1c\" (UID: \"3c0670a6-888e-40e3-bf5d-82779e70dd1c\") " Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.444323 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c0670a6-888e-40e3-bf5d-82779e70dd1c-client-ca\") pod \"3c0670a6-888e-40e3-bf5d-82779e70dd1c\" (UID: \"3c0670a6-888e-40e3-bf5d-82779e70dd1c\") " Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.444390 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nff5\" (UniqueName: \"kubernetes.io/projected/3c0670a6-888e-40e3-bf5d-82779e70dd1c-kube-api-access-4nff5\") pod \"3c0670a6-888e-40e3-bf5d-82779e70dd1c\" (UID: \"3c0670a6-888e-40e3-bf5d-82779e70dd1c\") " Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.444631 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c0670a6-888e-40e3-bf5d-82779e70dd1c-proxy-ca-bundles\") pod \"3c0670a6-888e-40e3-bf5d-82779e70dd1c\" (UID: \"3c0670a6-888e-40e3-bf5d-82779e70dd1c\") " Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.445136 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c0670a6-888e-40e3-bf5d-82779e70dd1c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3c0670a6-888e-40e3-bf5d-82779e70dd1c" (UID: "3c0670a6-888e-40e3-bf5d-82779e70dd1c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.445487 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c0670a6-888e-40e3-bf5d-82779e70dd1c-config" (OuterVolumeSpecName: "config") pod "3c0670a6-888e-40e3-bf5d-82779e70dd1c" (UID: "3c0670a6-888e-40e3-bf5d-82779e70dd1c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.447427 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c0670a6-888e-40e3-bf5d-82779e70dd1c-client-ca" (OuterVolumeSpecName: "client-ca") pod "3c0670a6-888e-40e3-bf5d-82779e70dd1c" (UID: "3c0670a6-888e-40e3-bf5d-82779e70dd1c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.447841 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c0670a6-888e-40e3-bf5d-82779e70dd1c-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.447866 4721 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c0670a6-888e-40e3-bf5d-82779e70dd1c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.447879 4721 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c0670a6-888e-40e3-bf5d-82779e70dd1c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.451588 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c0670a6-888e-40e3-bf5d-82779e70dd1c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3c0670a6-888e-40e3-bf5d-82779e70dd1c" (UID: "3c0670a6-888e-40e3-bf5d-82779e70dd1c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.451854 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c0670a6-888e-40e3-bf5d-82779e70dd1c-kube-api-access-4nff5" (OuterVolumeSpecName: "kube-api-access-4nff5") pod "3c0670a6-888e-40e3-bf5d-82779e70dd1c" (UID: "3c0670a6-888e-40e3-bf5d-82779e70dd1c"). InnerVolumeSpecName "kube-api-access-4nff5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.548356 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f1e834f-23b5-42a5-9d13-b9e5720a597c-client-ca\") pod \"8f1e834f-23b5-42a5-9d13-b9e5720a597c\" (UID: \"8f1e834f-23b5-42a5-9d13-b9e5720a597c\") " Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.548411 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f1e834f-23b5-42a5-9d13-b9e5720a597c-config\") pod \"8f1e834f-23b5-42a5-9d13-b9e5720a597c\" (UID: \"8f1e834f-23b5-42a5-9d13-b9e5720a597c\") " Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.548444 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f1e834f-23b5-42a5-9d13-b9e5720a597c-serving-cert\") pod \"8f1e834f-23b5-42a5-9d13-b9e5720a597c\" (UID: \"8f1e834f-23b5-42a5-9d13-b9e5720a597c\") " Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.548488 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2p87\" (UniqueName: \"kubernetes.io/projected/8f1e834f-23b5-42a5-9d13-b9e5720a597c-kube-api-access-w2p87\") pod \"8f1e834f-23b5-42a5-9d13-b9e5720a597c\" (UID: \"8f1e834f-23b5-42a5-9d13-b9e5720a597c\") " Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.548614 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c0670a6-888e-40e3-bf5d-82779e70dd1c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.548629 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nff5\" (UniqueName: 
\"kubernetes.io/projected/3c0670a6-888e-40e3-bf5d-82779e70dd1c-kube-api-access-4nff5\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.549497 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f1e834f-23b5-42a5-9d13-b9e5720a597c-client-ca" (OuterVolumeSpecName: "client-ca") pod "8f1e834f-23b5-42a5-9d13-b9e5720a597c" (UID: "8f1e834f-23b5-42a5-9d13-b9e5720a597c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.549684 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f1e834f-23b5-42a5-9d13-b9e5720a597c-config" (OuterVolumeSpecName: "config") pod "8f1e834f-23b5-42a5-9d13-b9e5720a597c" (UID: "8f1e834f-23b5-42a5-9d13-b9e5720a597c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.551926 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f1e834f-23b5-42a5-9d13-b9e5720a597c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8f1e834f-23b5-42a5-9d13-b9e5720a597c" (UID: "8f1e834f-23b5-42a5-9d13-b9e5720a597c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.551942 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f1e834f-23b5-42a5-9d13-b9e5720a597c-kube-api-access-w2p87" (OuterVolumeSpecName: "kube-api-access-w2p87") pod "8f1e834f-23b5-42a5-9d13-b9e5720a597c" (UID: "8f1e834f-23b5-42a5-9d13-b9e5720a597c"). InnerVolumeSpecName "kube-api-access-w2p87". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.649313 4721 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f1e834f-23b5-42a5-9d13-b9e5720a597c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.649360 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f1e834f-23b5-42a5-9d13-b9e5720a597c-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.649372 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f1e834f-23b5-42a5-9d13-b9e5720a597c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:13 crc kubenswrapper[4721]: I0202 13:06:13.649385 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2p87\" (UniqueName: \"kubernetes.io/projected/8f1e834f-23b5-42a5-9d13-b9e5720a597c-kube-api-access-w2p87\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.045177 4721 generic.go:334] "Generic (PLEG): container finished" podID="8f1e834f-23b5-42a5-9d13-b9e5720a597c" containerID="e50c92de6389f2984e09f1db3217014b56b2e9a4e2a44d5d7b627a3954d39329" exitCode=0 Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.045239 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f" event={"ID":"8f1e834f-23b5-42a5-9d13-b9e5720a597c","Type":"ContainerDied","Data":"e50c92de6389f2984e09f1db3217014b56b2e9a4e2a44d5d7b627a3954d39329"} Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.045314 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f" event={"ID":"8f1e834f-23b5-42a5-9d13-b9e5720a597c","Type":"ContainerDied","Data":"9dac14241b7592e3b43fe2d27aa1874f518d588eab3c2210074f031e8ca8e1b4"} Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.045378 4721 scope.go:117] "RemoveContainer" containerID="e50c92de6389f2984e09f1db3217014b56b2e9a4e2a44d5d7b627a3954d39329" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.045673 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.050170 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ffkjd" event={"ID":"3c0670a6-888e-40e3-bf5d-82779e70dd1c","Type":"ContainerDied","Data":"167d1cdbeb24f93927ece3e3fa3df789c23a8308344e8f29012657e06e53e904"} Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.050308 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ffkjd" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.068822 4721 scope.go:117] "RemoveContainer" containerID="e50c92de6389f2984e09f1db3217014b56b2e9a4e2a44d5d7b627a3954d39329" Feb 02 13:06:14 crc kubenswrapper[4721]: E0202 13:06:14.071187 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e50c92de6389f2984e09f1db3217014b56b2e9a4e2a44d5d7b627a3954d39329\": container with ID starting with e50c92de6389f2984e09f1db3217014b56b2e9a4e2a44d5d7b627a3954d39329 not found: ID does not exist" containerID="e50c92de6389f2984e09f1db3217014b56b2e9a4e2a44d5d7b627a3954d39329" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.071246 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e50c92de6389f2984e09f1db3217014b56b2e9a4e2a44d5d7b627a3954d39329"} err="failed to get container status \"e50c92de6389f2984e09f1db3217014b56b2e9a4e2a44d5d7b627a3954d39329\": rpc error: code = NotFound desc = could not find container \"e50c92de6389f2984e09f1db3217014b56b2e9a4e2a44d5d7b627a3954d39329\": container with ID starting with e50c92de6389f2984e09f1db3217014b56b2e9a4e2a44d5d7b627a3954d39329 not found: ID does not exist" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.071288 4721 scope.go:117] "RemoveContainer" containerID="d435dac504fe3034e5527129f376c6ed65b5ea3e1fe83d1eb8463d6282795a18" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.101363 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f"] Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.104861 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bg49f"] Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.112714 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ffkjd"] Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.122422 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ffkjd"] Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.251162 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5574d8cf7-r8p5p"] Feb 02 13:06:14 crc kubenswrapper[4721]: E0202 13:06:14.251394 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.251407 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 13:06:14 crc kubenswrapper[4721]: E0202 13:06:14.251419 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f1e834f-23b5-42a5-9d13-b9e5720a597c" containerName="route-controller-manager" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.251425 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f1e834f-23b5-42a5-9d13-b9e5720a597c" containerName="route-controller-manager" Feb 02 13:06:14 crc kubenswrapper[4721]: E0202 13:06:14.251440 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c0670a6-888e-40e3-bf5d-82779e70dd1c" containerName="controller-manager" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.251447 
4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c0670a6-888e-40e3-bf5d-82779e70dd1c" containerName="controller-manager" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.251530 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c0670a6-888e-40e3-bf5d-82779e70dd1c" containerName="controller-manager" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.251540 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.251554 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f1e834f-23b5-42a5-9d13-b9e5720a597c" containerName="route-controller-manager" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.251874 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5574d8cf7-r8p5p" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.254010 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.254053 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.254149 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.254197 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.254462 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.254616 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.263942 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7944cd5597-n48ht"] Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.265129 4721 util.go:30] "No sandbox for pod can be found. 
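The E-level NotFound pair above is a benign race, not a failure: by the time the retried RemoveContainer re-checked the container's status, CRI-O had already deleted it, so the desired state (container gone) already held. A sketch of the usual way such a CRI-style gRPC error is tolerated; removeContainer is a hypothetical stand-in for the runtime call:

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeContainer is a hypothetical stand-in for a CRI RemoveContainer RPC.
    func removeContainer(id string) error {
        // Simulate the runtime having already deleted the container.
        return status.Error(codes.NotFound, "could not find container \""+id+"\"")
    }

    func main() {
        err := removeContainer("e50c92de6389f2984e09f1db3217014b56b2e9a4e2a44d5d7b627a3954d39329")
        if status.Code(err) == codes.NotFound {
            // Idempotent delete: the container is already gone, so treat as success.
            fmt.Println("container already removed; nothing to do")
            return
        }
        if err != nil {
            fmt.Println("removal failed:", err)
        }
    }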
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7944cd5597-n48ht" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.269502 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.269614 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.269502 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.269793 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.270688 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcgl9\" (UniqueName: \"kubernetes.io/projected/2e2b299a-d51b-4bd3-9707-c4a04579e04d-kube-api-access-qcgl9\") pod \"controller-manager-5574d8cf7-r8p5p\" (UID: \"2e2b299a-d51b-4bd3-9707-c4a04579e04d\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-r8p5p" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.270790 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e2b299a-d51b-4bd3-9707-c4a04579e04d-config\") pod \"controller-manager-5574d8cf7-r8p5p\" (UID: \"2e2b299a-d51b-4bd3-9707-c4a04579e04d\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-r8p5p" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.270819 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e2b299a-d51b-4bd3-9707-c4a04579e04d-client-ca\") pod \"controller-manager-5574d8cf7-r8p5p\" (UID: \"2e2b299a-d51b-4bd3-9707-c4a04579e04d\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-r8p5p" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.270862 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.271020 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.270920 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e2b299a-d51b-4bd3-9707-c4a04579e04d-serving-cert\") pod \"controller-manager-5574d8cf7-r8p5p\" (UID: \"2e2b299a-d51b-4bd3-9707-c4a04579e04d\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-r8p5p" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.271258 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e2b299a-d51b-4bd3-9707-c4a04579e04d-proxy-ca-bundles\") pod \"controller-manager-5574d8cf7-r8p5p\" (UID: \"2e2b299a-d51b-4bd3-9707-c4a04579e04d\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-r8p5p" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.273633 4721 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.287116 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5574d8cf7-r8p5p"] Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.305291 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7944cd5597-n48ht"] Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.372327 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcgl9\" (UniqueName: \"kubernetes.io/projected/2e2b299a-d51b-4bd3-9707-c4a04579e04d-kube-api-access-qcgl9\") pod \"controller-manager-5574d8cf7-r8p5p\" (UID: \"2e2b299a-d51b-4bd3-9707-c4a04579e04d\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-r8p5p" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.372391 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3faddea-983f-4160-bd5d-0eb17dccf62f-serving-cert\") pod \"route-controller-manager-7944cd5597-n48ht\" (UID: \"c3faddea-983f-4160-bd5d-0eb17dccf62f\") " pod="openshift-route-controller-manager/route-controller-manager-7944cd5597-n48ht" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.372430 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3faddea-983f-4160-bd5d-0eb17dccf62f-config\") pod \"route-controller-manager-7944cd5597-n48ht\" (UID: \"c3faddea-983f-4160-bd5d-0eb17dccf62f\") " pod="openshift-route-controller-manager/route-controller-manager-7944cd5597-n48ht" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.372464 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e2b299a-d51b-4bd3-9707-c4a04579e04d-config\") pod \"controller-manager-5574d8cf7-r8p5p\" (UID: \"2e2b299a-d51b-4bd3-9707-c4a04579e04d\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-r8p5p" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.372492 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e2b299a-d51b-4bd3-9707-c4a04579e04d-client-ca\") pod \"controller-manager-5574d8cf7-r8p5p\" (UID: \"2e2b299a-d51b-4bd3-9707-c4a04579e04d\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-r8p5p" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.372533 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e2b299a-d51b-4bd3-9707-c4a04579e04d-serving-cert\") pod \"controller-manager-5574d8cf7-r8p5p\" (UID: \"2e2b299a-d51b-4bd3-9707-c4a04579e04d\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-r8p5p" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.372555 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdqf4\" (UniqueName: \"kubernetes.io/projected/c3faddea-983f-4160-bd5d-0eb17dccf62f-kube-api-access-xdqf4\") pod \"route-controller-manager-7944cd5597-n48ht\" (UID: \"c3faddea-983f-4160-bd5d-0eb17dccf62f\") " pod="openshift-route-controller-manager/route-controller-manager-7944cd5597-n48ht" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.372582 4721 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e2b299a-d51b-4bd3-9707-c4a04579e04d-proxy-ca-bundles\") pod \"controller-manager-5574d8cf7-r8p5p\" (UID: \"2e2b299a-d51b-4bd3-9707-c4a04579e04d\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-r8p5p" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.372609 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3faddea-983f-4160-bd5d-0eb17dccf62f-client-ca\") pod \"route-controller-manager-7944cd5597-n48ht\" (UID: \"c3faddea-983f-4160-bd5d-0eb17dccf62f\") " pod="openshift-route-controller-manager/route-controller-manager-7944cd5597-n48ht" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.373835 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e2b299a-d51b-4bd3-9707-c4a04579e04d-proxy-ca-bundles\") pod \"controller-manager-5574d8cf7-r8p5p\" (UID: \"2e2b299a-d51b-4bd3-9707-c4a04579e04d\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-r8p5p" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.373864 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e2b299a-d51b-4bd3-9707-c4a04579e04d-client-ca\") pod \"controller-manager-5574d8cf7-r8p5p\" (UID: \"2e2b299a-d51b-4bd3-9707-c4a04579e04d\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-r8p5p" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.375935 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e2b299a-d51b-4bd3-9707-c4a04579e04d-config\") pod \"controller-manager-5574d8cf7-r8p5p\" (UID: \"2e2b299a-d51b-4bd3-9707-c4a04579e04d\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-r8p5p" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.381779 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e2b299a-d51b-4bd3-9707-c4a04579e04d-serving-cert\") pod \"controller-manager-5574d8cf7-r8p5p\" (UID: \"2e2b299a-d51b-4bd3-9707-c4a04579e04d\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-r8p5p" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.390645 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcgl9\" (UniqueName: \"kubernetes.io/projected/2e2b299a-d51b-4bd3-9707-c4a04579e04d-kube-api-access-qcgl9\") pod \"controller-manager-5574d8cf7-r8p5p\" (UID: \"2e2b299a-d51b-4bd3-9707-c4a04579e04d\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-r8p5p" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.417558 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c0670a6-888e-40e3-bf5d-82779e70dd1c" path="/var/lib/kubelet/pods/3c0670a6-888e-40e3-bf5d-82779e70dd1c/volumes" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.418224 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f1e834f-23b5-42a5-9d13-b9e5720a597c" path="/var/lib/kubelet/pods/8f1e834f-23b5-42a5-9d13-b9e5720a597c/volumes" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.473354 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c3faddea-983f-4160-bd5d-0eb17dccf62f-serving-cert\") pod \"route-controller-manager-7944cd5597-n48ht\" (UID: \"c3faddea-983f-4160-bd5d-0eb17dccf62f\") " pod="openshift-route-controller-manager/route-controller-manager-7944cd5597-n48ht" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.474199 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3faddea-983f-4160-bd5d-0eb17dccf62f-config\") pod \"route-controller-manager-7944cd5597-n48ht\" (UID: \"c3faddea-983f-4160-bd5d-0eb17dccf62f\") " pod="openshift-route-controller-manager/route-controller-manager-7944cd5597-n48ht" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.475427 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3faddea-983f-4160-bd5d-0eb17dccf62f-config\") pod \"route-controller-manager-7944cd5597-n48ht\" (UID: \"c3faddea-983f-4160-bd5d-0eb17dccf62f\") " pod="openshift-route-controller-manager/route-controller-manager-7944cd5597-n48ht" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.475434 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdqf4\" (UniqueName: \"kubernetes.io/projected/c3faddea-983f-4160-bd5d-0eb17dccf62f-kube-api-access-xdqf4\") pod \"route-controller-manager-7944cd5597-n48ht\" (UID: \"c3faddea-983f-4160-bd5d-0eb17dccf62f\") " pod="openshift-route-controller-manager/route-controller-manager-7944cd5597-n48ht" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.475548 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3faddea-983f-4160-bd5d-0eb17dccf62f-client-ca\") pod \"route-controller-manager-7944cd5597-n48ht\" (UID: \"c3faddea-983f-4160-bd5d-0eb17dccf62f\") " pod="openshift-route-controller-manager/route-controller-manager-7944cd5597-n48ht" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.476666 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3faddea-983f-4160-bd5d-0eb17dccf62f-client-ca\") pod \"route-controller-manager-7944cd5597-n48ht\" (UID: \"c3faddea-983f-4160-bd5d-0eb17dccf62f\") " pod="openshift-route-controller-manager/route-controller-manager-7944cd5597-n48ht" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.478428 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3faddea-983f-4160-bd5d-0eb17dccf62f-serving-cert\") pod \"route-controller-manager-7944cd5597-n48ht\" (UID: \"c3faddea-983f-4160-bd5d-0eb17dccf62f\") " pod="openshift-route-controller-manager/route-controller-manager-7944cd5597-n48ht" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.502827 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdqf4\" (UniqueName: \"kubernetes.io/projected/c3faddea-983f-4160-bd5d-0eb17dccf62f-kube-api-access-xdqf4\") pod \"route-controller-manager-7944cd5597-n48ht\" (UID: \"c3faddea-983f-4160-bd5d-0eb17dccf62f\") " pod="openshift-route-controller-manager/route-controller-manager-7944cd5597-n48ht" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.601804 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5574d8cf7-r8p5p" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.613107 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7944cd5597-n48ht" Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.845989 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7944cd5597-n48ht"] Feb 02 13:06:14 crc kubenswrapper[4721]: I0202 13:06:14.902104 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5574d8cf7-r8p5p"] Feb 02 13:06:15 crc kubenswrapper[4721]: I0202 13:06:15.060613 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7944cd5597-n48ht" event={"ID":"c3faddea-983f-4160-bd5d-0eb17dccf62f","Type":"ContainerStarted","Data":"38aacc58cd27509baf6164a31e18b4d598488511522d77236da9c5fe7c5b5fe9"} Feb 02 13:06:15 crc kubenswrapper[4721]: I0202 13:06:15.061437 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5574d8cf7-r8p5p" event={"ID":"2e2b299a-d51b-4bd3-9707-c4a04579e04d","Type":"ContainerStarted","Data":"d56f67b7dea16a1c0a8eddfdc1ce0e7b9e600642d46866721c9ef00924975ed4"} Feb 02 13:06:16 crc kubenswrapper[4721]: I0202 13:06:16.068819 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7944cd5597-n48ht" event={"ID":"c3faddea-983f-4160-bd5d-0eb17dccf62f","Type":"ContainerStarted","Data":"bb4e5ab76660987913b2ae3c95246cb5a7af0eb7e0f0f7cf6fdef445c9b2b429"} Feb 02 13:06:16 crc kubenswrapper[4721]: I0202 13:06:16.069478 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7944cd5597-n48ht" Feb 02 13:06:16 crc kubenswrapper[4721]: I0202 13:06:16.073977 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5574d8cf7-r8p5p" event={"ID":"2e2b299a-d51b-4bd3-9707-c4a04579e04d","Type":"ContainerStarted","Data":"1aab96e4774bff0ac80b1ac1190055d2e982db0b177f1befdc8d948f25085b9d"} Feb 02 13:06:16 crc kubenswrapper[4721]: I0202 13:06:16.074387 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5574d8cf7-r8p5p" Feb 02 13:06:16 crc kubenswrapper[4721]: I0202 13:06:16.075428 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7944cd5597-n48ht" Feb 02 13:06:16 crc kubenswrapper[4721]: I0202 13:06:16.087160 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5574d8cf7-r8p5p" Feb 02 13:06:16 crc kubenswrapper[4721]: I0202 13:06:16.100381 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7944cd5597-n48ht" podStartSLOduration=2.100363128 podStartE2EDuration="2.100363128s" podCreationTimestamp="2026-02-02 13:06:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:06:16.09137621 +0000 UTC m=+316.393890609" watchObservedRunningTime="2026-02-02 13:06:16.100363128 +0000 UTC m=+316.402877517" Feb 02 
Feb 02 13:06:19 crc kubenswrapper[4721]: I0202 13:06:19.350809 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5574d8cf7-r8p5p"]
Feb 02 13:06:19 crc kubenswrapper[4721]: I0202 13:06:19.351368 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5574d8cf7-r8p5p" podUID="2e2b299a-d51b-4bd3-9707-c4a04579e04d" containerName="controller-manager" containerID="cri-o://1aab96e4774bff0ac80b1ac1190055d2e982db0b177f1befdc8d948f25085b9d" gracePeriod=30
Feb 02 13:06:19 crc kubenswrapper[4721]: I0202 13:06:19.778804 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5574d8cf7-r8p5p"
Feb 02 13:06:19 crc kubenswrapper[4721]: I0202 13:06:19.952762 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e2b299a-d51b-4bd3-9707-c4a04579e04d-serving-cert\") pod \"2e2b299a-d51b-4bd3-9707-c4a04579e04d\" (UID: \"2e2b299a-d51b-4bd3-9707-c4a04579e04d\") "
Feb 02 13:06:19 crc kubenswrapper[4721]: I0202 13:06:19.952832 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e2b299a-d51b-4bd3-9707-c4a04579e04d-config\") pod \"2e2b299a-d51b-4bd3-9707-c4a04579e04d\" (UID: \"2e2b299a-d51b-4bd3-9707-c4a04579e04d\") "
Feb 02 13:06:19 crc kubenswrapper[4721]: I0202 13:06:19.952909 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e2b299a-d51b-4bd3-9707-c4a04579e04d-client-ca\") pod \"2e2b299a-d51b-4bd3-9707-c4a04579e04d\" (UID: \"2e2b299a-d51b-4bd3-9707-c4a04579e04d\") "
Feb 02 13:06:19 crc kubenswrapper[4721]: I0202 13:06:19.952989 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e2b299a-d51b-4bd3-9707-c4a04579e04d-proxy-ca-bundles\") pod \"2e2b299a-d51b-4bd3-9707-c4a04579e04d\" (UID: \"2e2b299a-d51b-4bd3-9707-c4a04579e04d\") "
Feb 02 13:06:19 crc kubenswrapper[4721]: I0202 13:06:19.953562 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e2b299a-d51b-4bd3-9707-c4a04579e04d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2e2b299a-d51b-4bd3-9707-c4a04579e04d" (UID: "2e2b299a-d51b-4bd3-9707-c4a04579e04d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:06:19 crc kubenswrapper[4721]: I0202 13:06:19.953593 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e2b299a-d51b-4bd3-9707-c4a04579e04d-config" (OuterVolumeSpecName: "config") pod "2e2b299a-d51b-4bd3-9707-c4a04579e04d" (UID: "2e2b299a-d51b-4bd3-9707-c4a04579e04d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:06:19 crc kubenswrapper[4721]: I0202 13:06:19.953690 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e2b299a-d51b-4bd3-9707-c4a04579e04d-client-ca" (OuterVolumeSpecName: "client-ca") pod "2e2b299a-d51b-4bd3-9707-c4a04579e04d" (UID: "2e2b299a-d51b-4bd3-9707-c4a04579e04d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:06:19 crc kubenswrapper[4721]: I0202 13:06:19.953904 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcgl9\" (UniqueName: \"kubernetes.io/projected/2e2b299a-d51b-4bd3-9707-c4a04579e04d-kube-api-access-qcgl9\") pod \"2e2b299a-d51b-4bd3-9707-c4a04579e04d\" (UID: \"2e2b299a-d51b-4bd3-9707-c4a04579e04d\") "
Feb 02 13:06:19 crc kubenswrapper[4721]: I0202 13:06:19.954251 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e2b299a-d51b-4bd3-9707-c4a04579e04d-config\") on node \"crc\" DevicePath \"\""
Feb 02 13:06:19 crc kubenswrapper[4721]: I0202 13:06:19.954269 4721 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e2b299a-d51b-4bd3-9707-c4a04579e04d-client-ca\") on node \"crc\" DevicePath \"\""
Feb 02 13:06:19 crc kubenswrapper[4721]: I0202 13:06:19.954279 4721 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e2b299a-d51b-4bd3-9707-c4a04579e04d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 02 13:06:19 crc kubenswrapper[4721]: I0202 13:06:19.959477 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e2b299a-d51b-4bd3-9707-c4a04579e04d-kube-api-access-qcgl9" (OuterVolumeSpecName: "kube-api-access-qcgl9") pod "2e2b299a-d51b-4bd3-9707-c4a04579e04d" (UID: "2e2b299a-d51b-4bd3-9707-c4a04579e04d"). InnerVolumeSpecName "kube-api-access-qcgl9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:06:19 crc kubenswrapper[4721]: I0202 13:06:19.961443 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2b299a-d51b-4bd3-9707-c4a04579e04d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2e2b299a-d51b-4bd3-9707-c4a04579e04d" (UID: "2e2b299a-d51b-4bd3-9707-c4a04579e04d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.055165 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcgl9\" (UniqueName: \"kubernetes.io/projected/2e2b299a-d51b-4bd3-9707-c4a04579e04d-kube-api-access-qcgl9\") on node \"crc\" DevicePath \"\""
Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.055199 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e2b299a-d51b-4bd3-9707-c4a04579e04d-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.103396 4721 generic.go:334] "Generic (PLEG): container finished" podID="2e2b299a-d51b-4bd3-9707-c4a04579e04d" containerID="1aab96e4774bff0ac80b1ac1190055d2e982db0b177f1befdc8d948f25085b9d" exitCode=0
Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.103435 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5574d8cf7-r8p5p"
Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.103448 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5574d8cf7-r8p5p" event={"ID":"2e2b299a-d51b-4bd3-9707-c4a04579e04d","Type":"ContainerDied","Data":"1aab96e4774bff0ac80b1ac1190055d2e982db0b177f1befdc8d948f25085b9d"}
Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.103479 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5574d8cf7-r8p5p" event={"ID":"2e2b299a-d51b-4bd3-9707-c4a04579e04d","Type":"ContainerDied","Data":"d56f67b7dea16a1c0a8eddfdc1ce0e7b9e600642d46866721c9ef00924975ed4"}
Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.103501 4721 scope.go:117] "RemoveContainer" containerID="1aab96e4774bff0ac80b1ac1190055d2e982db0b177f1befdc8d948f25085b9d"
Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.125172 4721 scope.go:117] "RemoveContainer" containerID="1aab96e4774bff0ac80b1ac1190055d2e982db0b177f1befdc8d948f25085b9d"
Feb 02 13:06:20 crc kubenswrapper[4721]: E0202 13:06:20.125609 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aab96e4774bff0ac80b1ac1190055d2e982db0b177f1befdc8d948f25085b9d\": container with ID starting with 1aab96e4774bff0ac80b1ac1190055d2e982db0b177f1befdc8d948f25085b9d not found: ID does not exist" containerID="1aab96e4774bff0ac80b1ac1190055d2e982db0b177f1befdc8d948f25085b9d"
Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.125666 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aab96e4774bff0ac80b1ac1190055d2e982db0b177f1befdc8d948f25085b9d"} err="failed to get container status \"1aab96e4774bff0ac80b1ac1190055d2e982db0b177f1befdc8d948f25085b9d\": rpc error: code = NotFound desc = could not find container \"1aab96e4774bff0ac80b1ac1190055d2e982db0b177f1befdc8d948f25085b9d\": container with ID starting with 1aab96e4774bff0ac80b1ac1190055d2e982db0b177f1befdc8d948f25085b9d not found: ID does not exist"
Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.132983 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5574d8cf7-r8p5p"]
Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.136260 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5574d8cf7-r8p5p"]
Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.416348 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e2b299a-d51b-4bd3-9707-c4a04579e04d" path="/var/lib/kubelet/pods/2e2b299a-d51b-4bd3-9707-c4a04579e04d/volumes"
Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.466896 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77b768d98c-wq8j4"]
Feb 02 13:06:20 crc kubenswrapper[4721]: E0202 13:06:20.467121 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e2b299a-d51b-4bd3-9707-c4a04579e04d" containerName="controller-manager"
Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.467135 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e2b299a-d51b-4bd3-9707-c4a04579e04d" containerName="controller-manager"
Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.467248 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e2b299a-d51b-4bd3-9707-c4a04579e04d" containerName="controller-manager"
podUID="2e2b299a-d51b-4bd3-9707-c4a04579e04d" containerName="controller-manager" Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.467609 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77b768d98c-wq8j4" Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.470310 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.470673 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.470840 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.471671 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.472497 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.475606 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.476682 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.485840 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77b768d98c-wq8j4"] Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.661880 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/25a832ca-eec3-4483-b12a-4bc922d51326-client-ca\") pod \"controller-manager-77b768d98c-wq8j4\" (UID: \"25a832ca-eec3-4483-b12a-4bc922d51326\") " pod="openshift-controller-manager/controller-manager-77b768d98c-wq8j4" Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.661957 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n98g\" (UniqueName: \"kubernetes.io/projected/25a832ca-eec3-4483-b12a-4bc922d51326-kube-api-access-2n98g\") pod \"controller-manager-77b768d98c-wq8j4\" (UID: \"25a832ca-eec3-4483-b12a-4bc922d51326\") " pod="openshift-controller-manager/controller-manager-77b768d98c-wq8j4" Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.661986 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25a832ca-eec3-4483-b12a-4bc922d51326-config\") pod \"controller-manager-77b768d98c-wq8j4\" (UID: \"25a832ca-eec3-4483-b12a-4bc922d51326\") " pod="openshift-controller-manager/controller-manager-77b768d98c-wq8j4" Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.662003 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25a832ca-eec3-4483-b12a-4bc922d51326-serving-cert\") pod \"controller-manager-77b768d98c-wq8j4\" (UID: \"25a832ca-eec3-4483-b12a-4bc922d51326\") " pod="openshift-controller-manager/controller-manager-77b768d98c-wq8j4" Feb 02 13:06:20 crc 
kubenswrapper[4721]: I0202 13:06:20.662037 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/25a832ca-eec3-4483-b12a-4bc922d51326-proxy-ca-bundles\") pod \"controller-manager-77b768d98c-wq8j4\" (UID: \"25a832ca-eec3-4483-b12a-4bc922d51326\") " pod="openshift-controller-manager/controller-manager-77b768d98c-wq8j4" Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.763520 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/25a832ca-eec3-4483-b12a-4bc922d51326-client-ca\") pod \"controller-manager-77b768d98c-wq8j4\" (UID: \"25a832ca-eec3-4483-b12a-4bc922d51326\") " pod="openshift-controller-manager/controller-manager-77b768d98c-wq8j4" Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.763840 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n98g\" (UniqueName: \"kubernetes.io/projected/25a832ca-eec3-4483-b12a-4bc922d51326-kube-api-access-2n98g\") pod \"controller-manager-77b768d98c-wq8j4\" (UID: \"25a832ca-eec3-4483-b12a-4bc922d51326\") " pod="openshift-controller-manager/controller-manager-77b768d98c-wq8j4" Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.763869 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25a832ca-eec3-4483-b12a-4bc922d51326-config\") pod \"controller-manager-77b768d98c-wq8j4\" (UID: \"25a832ca-eec3-4483-b12a-4bc922d51326\") " pod="openshift-controller-manager/controller-manager-77b768d98c-wq8j4" Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.763886 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25a832ca-eec3-4483-b12a-4bc922d51326-serving-cert\") pod \"controller-manager-77b768d98c-wq8j4\" (UID: \"25a832ca-eec3-4483-b12a-4bc922d51326\") " pod="openshift-controller-manager/controller-manager-77b768d98c-wq8j4" Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.763923 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/25a832ca-eec3-4483-b12a-4bc922d51326-proxy-ca-bundles\") pod \"controller-manager-77b768d98c-wq8j4\" (UID: \"25a832ca-eec3-4483-b12a-4bc922d51326\") " pod="openshift-controller-manager/controller-manager-77b768d98c-wq8j4" Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.764844 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/25a832ca-eec3-4483-b12a-4bc922d51326-proxy-ca-bundles\") pod \"controller-manager-77b768d98c-wq8j4\" (UID: \"25a832ca-eec3-4483-b12a-4bc922d51326\") " pod="openshift-controller-manager/controller-manager-77b768d98c-wq8j4" Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.765125 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/25a832ca-eec3-4483-b12a-4bc922d51326-client-ca\") pod \"controller-manager-77b768d98c-wq8j4\" (UID: \"25a832ca-eec3-4483-b12a-4bc922d51326\") " pod="openshift-controller-manager/controller-manager-77b768d98c-wq8j4" Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.765526 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/25a832ca-eec3-4483-b12a-4bc922d51326-config\") pod \"controller-manager-77b768d98c-wq8j4\" (UID: \"25a832ca-eec3-4483-b12a-4bc922d51326\") " pod="openshift-controller-manager/controller-manager-77b768d98c-wq8j4" Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.775444 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25a832ca-eec3-4483-b12a-4bc922d51326-serving-cert\") pod \"controller-manager-77b768d98c-wq8j4\" (UID: \"25a832ca-eec3-4483-b12a-4bc922d51326\") " pod="openshift-controller-manager/controller-manager-77b768d98c-wq8j4" Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.782868 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n98g\" (UniqueName: \"kubernetes.io/projected/25a832ca-eec3-4483-b12a-4bc922d51326-kube-api-access-2n98g\") pod \"controller-manager-77b768d98c-wq8j4\" (UID: \"25a832ca-eec3-4483-b12a-4bc922d51326\") " pod="openshift-controller-manager/controller-manager-77b768d98c-wq8j4" Feb 02 13:06:20 crc kubenswrapper[4721]: I0202 13:06:20.783184 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77b768d98c-wq8j4" Feb 02 13:06:21 crc kubenswrapper[4721]: I0202 13:06:21.012937 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77b768d98c-wq8j4"] Feb 02 13:06:21 crc kubenswrapper[4721]: W0202 13:06:21.017655 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25a832ca_eec3_4483_b12a_4bc922d51326.slice/crio-12aa00937c8b0761da5e1b6892ac57c359e37da61e443d450319dba4f1cacec8 WatchSource:0}: Error finding container 12aa00937c8b0761da5e1b6892ac57c359e37da61e443d450319dba4f1cacec8: Status 404 returned error can't find the container with id 12aa00937c8b0761da5e1b6892ac57c359e37da61e443d450319dba4f1cacec8 Feb 02 13:06:21 crc kubenswrapper[4721]: I0202 13:06:21.111038 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77b768d98c-wq8j4" event={"ID":"25a832ca-eec3-4483-b12a-4bc922d51326","Type":"ContainerStarted","Data":"12aa00937c8b0761da5e1b6892ac57c359e37da61e443d450319dba4f1cacec8"} Feb 02 13:06:22 crc kubenswrapper[4721]: I0202 13:06:22.123011 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77b768d98c-wq8j4" event={"ID":"25a832ca-eec3-4483-b12a-4bc922d51326","Type":"ContainerStarted","Data":"816a45936f6803b12e0ea97f51048b495653f4de87354e7bba1a4d4d4b951e79"} Feb 02 13:06:22 crc kubenswrapper[4721]: I0202 13:06:22.123606 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77b768d98c-wq8j4" Feb 02 13:06:22 crc kubenswrapper[4721]: I0202 13:06:22.130391 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77b768d98c-wq8j4" Feb 02 13:06:22 crc kubenswrapper[4721]: I0202 13:06:22.151277 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77b768d98c-wq8j4" podStartSLOduration=3.151247385 podStartE2EDuration="3.151247385s" podCreationTimestamp="2026-02-02 13:06:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-02 13:06:22.148285683 +0000 UTC m=+322.450800072" watchObservedRunningTime="2026-02-02 13:06:22.151247385 +0000 UTC m=+322.453761774" Feb 02 13:06:32 crc kubenswrapper[4721]: I0202 13:06:32.824051 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77b768d98c-wq8j4"] Feb 02 13:06:32 crc kubenswrapper[4721]: I0202 13:06:32.824871 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-77b768d98c-wq8j4" podUID="25a832ca-eec3-4483-b12a-4bc922d51326" containerName="controller-manager" containerID="cri-o://816a45936f6803b12e0ea97f51048b495653f4de87354e7bba1a4d4d4b951e79" gracePeriod=30 Feb 02 13:06:33 crc kubenswrapper[4721]: I0202 13:06:33.207460 4721 generic.go:334] "Generic (PLEG): container finished" podID="25a832ca-eec3-4483-b12a-4bc922d51326" containerID="816a45936f6803b12e0ea97f51048b495653f4de87354e7bba1a4d4d4b951e79" exitCode=0 Feb 02 13:06:33 crc kubenswrapper[4721]: I0202 13:06:33.207678 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77b768d98c-wq8j4" event={"ID":"25a832ca-eec3-4483-b12a-4bc922d51326","Type":"ContainerDied","Data":"816a45936f6803b12e0ea97f51048b495653f4de87354e7bba1a4d4d4b951e79"} Feb 02 13:06:33 crc kubenswrapper[4721]: I0202 13:06:33.328007 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77b768d98c-wq8j4" Feb 02 13:06:33 crc kubenswrapper[4721]: I0202 13:06:33.515406 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25a832ca-eec3-4483-b12a-4bc922d51326-config\") pod \"25a832ca-eec3-4483-b12a-4bc922d51326\" (UID: \"25a832ca-eec3-4483-b12a-4bc922d51326\") " Feb 02 13:06:33 crc kubenswrapper[4721]: I0202 13:06:33.515570 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2n98g\" (UniqueName: \"kubernetes.io/projected/25a832ca-eec3-4483-b12a-4bc922d51326-kube-api-access-2n98g\") pod \"25a832ca-eec3-4483-b12a-4bc922d51326\" (UID: \"25a832ca-eec3-4483-b12a-4bc922d51326\") " Feb 02 13:06:33 crc kubenswrapper[4721]: I0202 13:06:33.515600 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/25a832ca-eec3-4483-b12a-4bc922d51326-proxy-ca-bundles\") pod \"25a832ca-eec3-4483-b12a-4bc922d51326\" (UID: \"25a832ca-eec3-4483-b12a-4bc922d51326\") " Feb 02 13:06:33 crc kubenswrapper[4721]: I0202 13:06:33.515647 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/25a832ca-eec3-4483-b12a-4bc922d51326-client-ca\") pod \"25a832ca-eec3-4483-b12a-4bc922d51326\" (UID: \"25a832ca-eec3-4483-b12a-4bc922d51326\") " Feb 02 13:06:33 crc kubenswrapper[4721]: I0202 13:06:33.515675 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25a832ca-eec3-4483-b12a-4bc922d51326-serving-cert\") pod \"25a832ca-eec3-4483-b12a-4bc922d51326\" (UID: \"25a832ca-eec3-4483-b12a-4bc922d51326\") " Feb 02 13:06:33 crc kubenswrapper[4721]: I0202 13:06:33.517586 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25a832ca-eec3-4483-b12a-4bc922d51326-proxy-ca-bundles" (OuterVolumeSpecName: 
"proxy-ca-bundles") pod "25a832ca-eec3-4483-b12a-4bc922d51326" (UID: "25a832ca-eec3-4483-b12a-4bc922d51326"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:06:33 crc kubenswrapper[4721]: I0202 13:06:33.517666 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25a832ca-eec3-4483-b12a-4bc922d51326-config" (OuterVolumeSpecName: "config") pod "25a832ca-eec3-4483-b12a-4bc922d51326" (UID: "25a832ca-eec3-4483-b12a-4bc922d51326"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:06:33 crc kubenswrapper[4721]: I0202 13:06:33.517677 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25a832ca-eec3-4483-b12a-4bc922d51326-client-ca" (OuterVolumeSpecName: "client-ca") pod "25a832ca-eec3-4483-b12a-4bc922d51326" (UID: "25a832ca-eec3-4483-b12a-4bc922d51326"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:06:33 crc kubenswrapper[4721]: I0202 13:06:33.521396 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25a832ca-eec3-4483-b12a-4bc922d51326-kube-api-access-2n98g" (OuterVolumeSpecName: "kube-api-access-2n98g") pod "25a832ca-eec3-4483-b12a-4bc922d51326" (UID: "25a832ca-eec3-4483-b12a-4bc922d51326"). InnerVolumeSpecName "kube-api-access-2n98g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:06:33 crc kubenswrapper[4721]: I0202 13:06:33.521434 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25a832ca-eec3-4483-b12a-4bc922d51326-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "25a832ca-eec3-4483-b12a-4bc922d51326" (UID: "25a832ca-eec3-4483-b12a-4bc922d51326"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:06:33 crc kubenswrapper[4721]: I0202 13:06:33.617795 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2n98g\" (UniqueName: \"kubernetes.io/projected/25a832ca-eec3-4483-b12a-4bc922d51326-kube-api-access-2n98g\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:33 crc kubenswrapper[4721]: I0202 13:06:33.617838 4721 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/25a832ca-eec3-4483-b12a-4bc922d51326-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:33 crc kubenswrapper[4721]: I0202 13:06:33.617848 4721 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/25a832ca-eec3-4483-b12a-4bc922d51326-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:33 crc kubenswrapper[4721]: I0202 13:06:33.617857 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25a832ca-eec3-4483-b12a-4bc922d51326-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:33 crc kubenswrapper[4721]: I0202 13:06:33.617865 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25a832ca-eec3-4483-b12a-4bc922d51326-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.215142 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77b768d98c-wq8j4" event={"ID":"25a832ca-eec3-4483-b12a-4bc922d51326","Type":"ContainerDied","Data":"12aa00937c8b0761da5e1b6892ac57c359e37da61e443d450319dba4f1cacec8"} Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.215203 4721 scope.go:117] "RemoveContainer" containerID="816a45936f6803b12e0ea97f51048b495653f4de87354e7bba1a4d4d4b951e79" Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.215215 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77b768d98c-wq8j4" Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.242231 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77b768d98c-wq8j4"] Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.246591 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-77b768d98c-wq8j4"] Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.416739 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25a832ca-eec3-4483-b12a-4bc922d51326" path="/var/lib/kubelet/pods/25a832ca-eec3-4483-b12a-4bc922d51326/volumes" Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.477373 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5574d8cf7-t7w7b"] Feb 02 13:06:34 crc kubenswrapper[4721]: E0202 13:06:34.477626 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25a832ca-eec3-4483-b12a-4bc922d51326" containerName="controller-manager" Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.477641 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="25a832ca-eec3-4483-b12a-4bc922d51326" containerName="controller-manager" Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.477760 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="25a832ca-eec3-4483-b12a-4bc922d51326" containerName="controller-manager" Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.478213 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5574d8cf7-t7w7b" Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.480901 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.481014 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.482412 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.482632 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.482777 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.482982 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.493001 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.494974 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5574d8cf7-t7w7b"] Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.531923 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6266a42-238a-497e-ba45-2994385106f8-serving-cert\") pod \"controller-manager-5574d8cf7-t7w7b\" (UID: 
\"e6266a42-238a-497e-ba45-2994385106f8\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-t7w7b" Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.531980 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6266a42-238a-497e-ba45-2994385106f8-proxy-ca-bundles\") pod \"controller-manager-5574d8cf7-t7w7b\" (UID: \"e6266a42-238a-497e-ba45-2994385106f8\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-t7w7b" Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.532026 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6266a42-238a-497e-ba45-2994385106f8-client-ca\") pod \"controller-manager-5574d8cf7-t7w7b\" (UID: \"e6266a42-238a-497e-ba45-2994385106f8\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-t7w7b" Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.532061 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6266a42-238a-497e-ba45-2994385106f8-config\") pod \"controller-manager-5574d8cf7-t7w7b\" (UID: \"e6266a42-238a-497e-ba45-2994385106f8\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-t7w7b" Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.532213 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfgmv\" (UniqueName: \"kubernetes.io/projected/e6266a42-238a-497e-ba45-2994385106f8-kube-api-access-cfgmv\") pod \"controller-manager-5574d8cf7-t7w7b\" (UID: \"e6266a42-238a-497e-ba45-2994385106f8\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-t7w7b" Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.633662 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6266a42-238a-497e-ba45-2994385106f8-serving-cert\") pod \"controller-manager-5574d8cf7-t7w7b\" (UID: \"e6266a42-238a-497e-ba45-2994385106f8\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-t7w7b" Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.633719 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6266a42-238a-497e-ba45-2994385106f8-proxy-ca-bundles\") pod \"controller-manager-5574d8cf7-t7w7b\" (UID: \"e6266a42-238a-497e-ba45-2994385106f8\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-t7w7b" Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.633754 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6266a42-238a-497e-ba45-2994385106f8-client-ca\") pod \"controller-manager-5574d8cf7-t7w7b\" (UID: \"e6266a42-238a-497e-ba45-2994385106f8\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-t7w7b" Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.633783 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6266a42-238a-497e-ba45-2994385106f8-config\") pod \"controller-manager-5574d8cf7-t7w7b\" (UID: \"e6266a42-238a-497e-ba45-2994385106f8\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-t7w7b" Feb 02 13:06:34 crc kubenswrapper[4721]: 
I0202 13:06:34.633799 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfgmv\" (UniqueName: \"kubernetes.io/projected/e6266a42-238a-497e-ba45-2994385106f8-kube-api-access-cfgmv\") pod \"controller-manager-5574d8cf7-t7w7b\" (UID: \"e6266a42-238a-497e-ba45-2994385106f8\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-t7w7b" Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.635167 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6266a42-238a-497e-ba45-2994385106f8-client-ca\") pod \"controller-manager-5574d8cf7-t7w7b\" (UID: \"e6266a42-238a-497e-ba45-2994385106f8\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-t7w7b" Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.635360 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6266a42-238a-497e-ba45-2994385106f8-proxy-ca-bundles\") pod \"controller-manager-5574d8cf7-t7w7b\" (UID: \"e6266a42-238a-497e-ba45-2994385106f8\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-t7w7b" Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.635632 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6266a42-238a-497e-ba45-2994385106f8-config\") pod \"controller-manager-5574d8cf7-t7w7b\" (UID: \"e6266a42-238a-497e-ba45-2994385106f8\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-t7w7b" Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.644120 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6266a42-238a-497e-ba45-2994385106f8-serving-cert\") pod \"controller-manager-5574d8cf7-t7w7b\" (UID: \"e6266a42-238a-497e-ba45-2994385106f8\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-t7w7b" Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.650117 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfgmv\" (UniqueName: \"kubernetes.io/projected/e6266a42-238a-497e-ba45-2994385106f8-kube-api-access-cfgmv\") pod \"controller-manager-5574d8cf7-t7w7b\" (UID: \"e6266a42-238a-497e-ba45-2994385106f8\") " pod="openshift-controller-manager/controller-manager-5574d8cf7-t7w7b" Feb 02 13:06:34 crc kubenswrapper[4721]: I0202 13:06:34.794922 4721 util.go:30] "No sandbox for pod can be found. 
Feb 02 13:06:35 crc kubenswrapper[4721]: I0202 13:06:35.256702 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5574d8cf7-t7w7b"]
Feb 02 13:06:35 crc kubenswrapper[4721]: W0202 13:06:35.259389 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6266a42_238a_497e_ba45_2994385106f8.slice/crio-590353e2a8b6fe748bdf645c184eaf1162cf3eca687c3a54fb0c9e29035596fd WatchSource:0}: Error finding container 590353e2a8b6fe748bdf645c184eaf1162cf3eca687c3a54fb0c9e29035596fd: Status 404 returned error can't find the container with id 590353e2a8b6fe748bdf645c184eaf1162cf3eca687c3a54fb0c9e29035596fd
Feb 02 13:06:36 crc kubenswrapper[4721]: I0202 13:06:36.230107 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5574d8cf7-t7w7b" event={"ID":"e6266a42-238a-497e-ba45-2994385106f8","Type":"ContainerStarted","Data":"ddab1107d92641a945d2a1cecc58ed8908eb0412dd2134e2a415a3ab9dac8b54"}
Feb 02 13:06:36 crc kubenswrapper[4721]: I0202 13:06:36.230486 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5574d8cf7-t7w7b" event={"ID":"e6266a42-238a-497e-ba45-2994385106f8","Type":"ContainerStarted","Data":"590353e2a8b6fe748bdf645c184eaf1162cf3eca687c3a54fb0c9e29035596fd"}
Feb 02 13:06:36 crc kubenswrapper[4721]: I0202 13:06:36.230509 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5574d8cf7-t7w7b"
Feb 02 13:06:36 crc kubenswrapper[4721]: I0202 13:06:36.236770 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5574d8cf7-t7w7b"
Feb 02 13:06:36 crc kubenswrapper[4721]: I0202 13:06:36.248244 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5574d8cf7-t7w7b" podStartSLOduration=4.248222593 podStartE2EDuration="4.248222593s" podCreationTimestamp="2026-02-02 13:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:06:36.248174041 +0000 UTC m=+336.550688440" watchObservedRunningTime="2026-02-02 13:06:36.248222593 +0000 UTC m=+336.550736982"
Feb 02 13:06:47 crc kubenswrapper[4721]: I0202 13:06:47.719938 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-n5k57"]
Feb 02 13:06:47 crc kubenswrapper[4721]: I0202 13:06:47.721327 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-n5k57"
Feb 02 13:06:47 crc kubenswrapper[4721]: I0202 13:06:47.732133 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-n5k57"]
Feb 02 13:06:47 crc kubenswrapper[4721]: I0202 13:06:47.822601 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgff4\" (UniqueName: \"kubernetes.io/projected/5628b062-be01-4627-aec5-247e0de021e7-kube-api-access-tgff4\") pod \"image-registry-66df7c8f76-n5k57\" (UID: \"5628b062-be01-4627-aec5-247e0de021e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-n5k57"
Feb 02 13:06:47 crc kubenswrapper[4721]: I0202 13:06:47.822675 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5628b062-be01-4627-aec5-247e0de021e7-trusted-ca\") pod \"image-registry-66df7c8f76-n5k57\" (UID: \"5628b062-be01-4627-aec5-247e0de021e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-n5k57"
Feb 02 13:06:47 crc kubenswrapper[4721]: I0202 13:06:47.822719 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5628b062-be01-4627-aec5-247e0de021e7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-n5k57\" (UID: \"5628b062-be01-4627-aec5-247e0de021e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-n5k57"
Feb 02 13:06:47 crc kubenswrapper[4721]: I0202 13:06:47.822749 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5628b062-be01-4627-aec5-247e0de021e7-bound-sa-token\") pod \"image-registry-66df7c8f76-n5k57\" (UID: \"5628b062-be01-4627-aec5-247e0de021e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-n5k57"
Feb 02 13:06:47 crc kubenswrapper[4721]: I0202 13:06:47.822781 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-n5k57\" (UID: \"5628b062-be01-4627-aec5-247e0de021e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-n5k57"
Feb 02 13:06:47 crc kubenswrapper[4721]: I0202 13:06:47.822806 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5628b062-be01-4627-aec5-247e0de021e7-registry-certificates\") pod \"image-registry-66df7c8f76-n5k57\" (UID: \"5628b062-be01-4627-aec5-247e0de021e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-n5k57"
Feb 02 13:06:47 crc kubenswrapper[4721]: I0202 13:06:47.822921 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5628b062-be01-4627-aec5-247e0de021e7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-n5k57\" (UID: \"5628b062-be01-4627-aec5-247e0de021e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-n5k57"
Feb 02 13:06:47 crc kubenswrapper[4721]: I0202 13:06:47.822959 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5628b062-be01-4627-aec5-247e0de021e7-registry-tls\") pod \"image-registry-66df7c8f76-n5k57\" (UID: \"5628b062-be01-4627-aec5-247e0de021e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-n5k57"
Feb 02 13:06:47 crc kubenswrapper[4721]: I0202 13:06:47.847617 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-n5k57\" (UID: \"5628b062-be01-4627-aec5-247e0de021e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-n5k57"
Feb 02 13:06:47 crc kubenswrapper[4721]: I0202 13:06:47.924137 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5628b062-be01-4627-aec5-247e0de021e7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-n5k57\" (UID: \"5628b062-be01-4627-aec5-247e0de021e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-n5k57"
Feb 02 13:06:47 crc kubenswrapper[4721]: I0202 13:06:47.924212 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5628b062-be01-4627-aec5-247e0de021e7-registry-tls\") pod \"image-registry-66df7c8f76-n5k57\" (UID: \"5628b062-be01-4627-aec5-247e0de021e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-n5k57"
Feb 02 13:06:47 crc kubenswrapper[4721]: I0202 13:06:47.924245 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgff4\" (UniqueName: \"kubernetes.io/projected/5628b062-be01-4627-aec5-247e0de021e7-kube-api-access-tgff4\") pod \"image-registry-66df7c8f76-n5k57\" (UID: \"5628b062-be01-4627-aec5-247e0de021e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-n5k57"
Feb 02 13:06:47 crc kubenswrapper[4721]: I0202 13:06:47.924283 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5628b062-be01-4627-aec5-247e0de021e7-trusted-ca\") pod \"image-registry-66df7c8f76-n5k57\" (UID: \"5628b062-be01-4627-aec5-247e0de021e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-n5k57"
Feb 02 13:06:47 crc kubenswrapper[4721]: I0202 13:06:47.924325 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5628b062-be01-4627-aec5-247e0de021e7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-n5k57\" (UID: \"5628b062-be01-4627-aec5-247e0de021e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-n5k57"
Feb 02 13:06:47 crc kubenswrapper[4721]: I0202 13:06:47.924359 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5628b062-be01-4627-aec5-247e0de021e7-bound-sa-token\") pod \"image-registry-66df7c8f76-n5k57\" (UID: \"5628b062-be01-4627-aec5-247e0de021e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-n5k57"
Feb 02 13:06:47 crc kubenswrapper[4721]: I0202 13:06:47.924382 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5628b062-be01-4627-aec5-247e0de021e7-registry-certificates\") pod \"image-registry-66df7c8f76-n5k57\" (UID: \"5628b062-be01-4627-aec5-247e0de021e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-n5k57"
Feb 02 13:06:47 crc kubenswrapper[4721]: I0202 13:06:47.925033 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5628b062-be01-4627-aec5-247e0de021e7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-n5k57\" (UID: \"5628b062-be01-4627-aec5-247e0de021e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-n5k57"
Feb 02 13:06:47 crc kubenswrapper[4721]: I0202 13:06:47.925954 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5628b062-be01-4627-aec5-247e0de021e7-trusted-ca\") pod \"image-registry-66df7c8f76-n5k57\" (UID: \"5628b062-be01-4627-aec5-247e0de021e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-n5k57"
Feb 02 13:06:47 crc kubenswrapper[4721]: I0202 13:06:47.926056 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5628b062-be01-4627-aec5-247e0de021e7-registry-certificates\") pod \"image-registry-66df7c8f76-n5k57\" (UID: \"5628b062-be01-4627-aec5-247e0de021e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-n5k57"
Feb 02 13:06:47 crc kubenswrapper[4721]: I0202 13:06:47.929666 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5628b062-be01-4627-aec5-247e0de021e7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-n5k57\" (UID: \"5628b062-be01-4627-aec5-247e0de021e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-n5k57"
Feb 02 13:06:47 crc kubenswrapper[4721]: I0202 13:06:47.929693 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5628b062-be01-4627-aec5-247e0de021e7-registry-tls\") pod \"image-registry-66df7c8f76-n5k57\" (UID: \"5628b062-be01-4627-aec5-247e0de021e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-n5k57"
Feb 02 13:06:47 crc kubenswrapper[4721]: I0202 13:06:47.943148 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgff4\" (UniqueName: \"kubernetes.io/projected/5628b062-be01-4627-aec5-247e0de021e7-kube-api-access-tgff4\") pod \"image-registry-66df7c8f76-n5k57\" (UID: \"5628b062-be01-4627-aec5-247e0de021e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-n5k57"
Feb 02 13:06:47 crc kubenswrapper[4721]: I0202 13:06:47.946955 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5628b062-be01-4627-aec5-247e0de021e7-bound-sa-token\") pod \"image-registry-66df7c8f76-n5k57\" (UID: \"5628b062-be01-4627-aec5-247e0de021e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-n5k57"
Feb 02 13:06:48 crc kubenswrapper[4721]: I0202 13:06:48.047697 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-n5k57"
Feb 02 13:06:48 crc kubenswrapper[4721]: I0202 13:06:48.464361 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-n5k57"]
Feb 02 13:06:48 crc kubenswrapper[4721]: W0202 13:06:48.470122 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5628b062_be01_4627_aec5_247e0de021e7.slice/crio-5f212530e3225ba18d9ece1c0311befb7bd5e169d57efacc921a917e4380b790 WatchSource:0}: Error finding container 5f212530e3225ba18d9ece1c0311befb7bd5e169d57efacc921a917e4380b790: Status 404 returned error can't find the container with id 5f212530e3225ba18d9ece1c0311befb7bd5e169d57efacc921a917e4380b790
Feb 02 13:06:49 crc kubenswrapper[4721]: I0202 13:06:49.301705 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-n5k57" event={"ID":"5628b062-be01-4627-aec5-247e0de021e7","Type":"ContainerStarted","Data":"fac662d7e1da20f0d1cda3f9a98cb1097dd68dfa04e8e668547a49ffebd04c7d"}
Feb 02 13:06:49 crc kubenswrapper[4721]: I0202 13:06:49.302039 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-n5k57"
Feb 02 13:06:49 crc kubenswrapper[4721]: I0202 13:06:49.302052 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-n5k57" event={"ID":"5628b062-be01-4627-aec5-247e0de021e7","Type":"ContainerStarted","Data":"5f212530e3225ba18d9ece1c0311befb7bd5e169d57efacc921a917e4380b790"}
Feb 02 13:06:49 crc kubenswrapper[4721]: I0202 13:06:49.323965 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-n5k57" podStartSLOduration=2.323945819 podStartE2EDuration="2.323945819s" podCreationTimestamp="2026-02-02 13:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:06:49.318054376 +0000 UTC m=+349.620568765" watchObservedRunningTime="2026-02-02 13:06:49.323945819 +0000 UTC m=+349.626460218"
Feb 02 13:07:08 crc kubenswrapper[4721]: I0202 13:07:08.054824 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-n5k57"
Feb 02 13:07:08 crc kubenswrapper[4721]: I0202 13:07:08.104630 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wlhhk"]
Feb 02 13:07:10 crc kubenswrapper[4721]: I0202 13:07:10.794378 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s2tcj"]
Feb 02 13:07:10 crc kubenswrapper[4721]: I0202 13:07:10.795260 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s2tcj" podUID="c32d34a1-8dd8-435d-9491-748392c25b97" containerName="registry-server" containerID="cri-o://53d55fa7bab62269a87c795054de164d19a14b295c5f255ec1a3087c58689970" gracePeriod=30
Feb 02 13:07:10 crc kubenswrapper[4721]: I0202 13:07:10.807475 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ftf6s"]
Feb 02 13:07:10 crc kubenswrapper[4721]: I0202 13:07:10.807727 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ftf6s" podUID="7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b" containerName="registry-server" containerID="cri-o://410edca835d21d18be323b803ab840df97552992c0ec9f405104a39dad3828e0" gracePeriod=30
Feb 02 13:07:10 crc kubenswrapper[4721]: I0202 13:07:10.812994 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zcf44"]
Feb 02 13:07:10 crc kubenswrapper[4721]: I0202 13:07:10.813289 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-zcf44" podUID="2c9074bc-889d-4ce7-a250-6fc5984703e0" containerName="marketplace-operator" containerID="cri-o://598f0ede0e5be4e9da5dde9217b5303de54f203ff19e55b638f8757dabfd9366" gracePeriod=30
Feb 02 13:07:10 crc kubenswrapper[4721]: I0202 13:07:10.823770 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-75gx6"]
Feb 02 13:07:10 crc kubenswrapper[4721]: I0202 13:07:10.824048 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-75gx6" podUID="b11b9dcc-682e-48c6-9948-78aafcaf9e36" containerName="registry-server" containerID="cri-o://7fb035f08da78b4db85bc1ea0be2acf6c348cb105b55a68b2cac0ebf26b8bc1a" gracePeriod=30
Feb 02 13:07:10 crc kubenswrapper[4721]: I0202 13:07:10.831794 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pm5t7"]
Feb 02 13:07:10 crc kubenswrapper[4721]: I0202 13:07:10.839692 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pm5t7" podUID="b97707af-edd5-4907-9459-615b32a005e6" containerName="registry-server" containerID="cri-o://33ab21456de0805136a88c1d6ecd74979aabd0d3bfbd8cc958f6e35c18ed050d" gracePeriod=30
Feb 02 13:07:10 crc kubenswrapper[4721]: I0202 13:07:10.850750 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wdnhz"]
Feb 02 13:07:10 crc kubenswrapper[4721]: I0202 13:07:10.852222 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wdnhz"
Feb 02 13:07:10 crc kubenswrapper[4721]: I0202 13:07:10.857835 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wdnhz"]
Feb 02 13:07:10 crc kubenswrapper[4721]: I0202 13:07:10.863326 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/884fbbc4-b86d-4f88-9fc6-2aa2015b81d3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wdnhz\" (UID: \"884fbbc4-b86d-4f88-9fc6-2aa2015b81d3\") " pod="openshift-marketplace/marketplace-operator-79b997595-wdnhz"
Feb 02 13:07:10 crc kubenswrapper[4721]: I0202 13:07:10.863378 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhzwx\" (UniqueName: \"kubernetes.io/projected/884fbbc4-b86d-4f88-9fc6-2aa2015b81d3-kube-api-access-lhzwx\") pod \"marketplace-operator-79b997595-wdnhz\" (UID: \"884fbbc4-b86d-4f88-9fc6-2aa2015b81d3\") " pod="openshift-marketplace/marketplace-operator-79b997595-wdnhz"
Feb 02 13:07:10 crc kubenswrapper[4721]: I0202 13:07:10.863405 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/884fbbc4-b86d-4f88-9fc6-2aa2015b81d3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wdnhz\" (UID: \"884fbbc4-b86d-4f88-9fc6-2aa2015b81d3\") " pod="openshift-marketplace/marketplace-operator-79b997595-wdnhz"
Feb 02 13:07:10 crc kubenswrapper[4721]: I0202 13:07:10.965138 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/884fbbc4-b86d-4f88-9fc6-2aa2015b81d3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wdnhz\" (UID: \"884fbbc4-b86d-4f88-9fc6-2aa2015b81d3\") " pod="openshift-marketplace/marketplace-operator-79b997595-wdnhz"
Feb 02 13:07:10 crc kubenswrapper[4721]: I0202 13:07:10.965185 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhzwx\" (UniqueName: \"kubernetes.io/projected/884fbbc4-b86d-4f88-9fc6-2aa2015b81d3-kube-api-access-lhzwx\") pod \"marketplace-operator-79b997595-wdnhz\" (UID: \"884fbbc4-b86d-4f88-9fc6-2aa2015b81d3\") " pod="openshift-marketplace/marketplace-operator-79b997595-wdnhz"
Feb 02 13:07:10 crc kubenswrapper[4721]: I0202 13:07:10.965208 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/884fbbc4-b86d-4f88-9fc6-2aa2015b81d3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wdnhz\" (UID: \"884fbbc4-b86d-4f88-9fc6-2aa2015b81d3\") " pod="openshift-marketplace/marketplace-operator-79b997595-wdnhz"
Feb 02 13:07:10 crc kubenswrapper[4721]: I0202 13:07:10.968760 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/884fbbc4-b86d-4f88-9fc6-2aa2015b81d3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wdnhz\" (UID: \"884fbbc4-b86d-4f88-9fc6-2aa2015b81d3\") " pod="openshift-marketplace/marketplace-operator-79b997595-wdnhz"
Feb 02 13:07:10 crc kubenswrapper[4721]: I0202 13:07:10.972844 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/884fbbc4-b86d-4f88-9fc6-2aa2015b81d3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wdnhz\" (UID: \"884fbbc4-b86d-4f88-9fc6-2aa2015b81d3\") " pod="openshift-marketplace/marketplace-operator-79b997595-wdnhz"
Feb 02 13:07:10 crc kubenswrapper[4721]: I0202 13:07:10.984213 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhzwx\" (UniqueName: \"kubernetes.io/projected/884fbbc4-b86d-4f88-9fc6-2aa2015b81d3-kube-api-access-lhzwx\") pod \"marketplace-operator-79b997595-wdnhz\" (UID: \"884fbbc4-b86d-4f88-9fc6-2aa2015b81d3\") " pod="openshift-marketplace/marketplace-operator-79b997595-wdnhz"
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.252851 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wdnhz"
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.262792 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s2tcj"
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.269283 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgmqw\" (UniqueName: \"kubernetes.io/projected/c32d34a1-8dd8-435d-9491-748392c25b97-kube-api-access-bgmqw\") pod \"c32d34a1-8dd8-435d-9491-748392c25b97\" (UID: \"c32d34a1-8dd8-435d-9491-748392c25b97\") "
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.269374 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c32d34a1-8dd8-435d-9491-748392c25b97-utilities\") pod \"c32d34a1-8dd8-435d-9491-748392c25b97\" (UID: \"c32d34a1-8dd8-435d-9491-748392c25b97\") "
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.269456 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c32d34a1-8dd8-435d-9491-748392c25b97-catalog-content\") pod \"c32d34a1-8dd8-435d-9491-748392c25b97\" (UID: \"c32d34a1-8dd8-435d-9491-748392c25b97\") "
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.270994 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c32d34a1-8dd8-435d-9491-748392c25b97-utilities" (OuterVolumeSpecName: "utilities") pod "c32d34a1-8dd8-435d-9491-748392c25b97" (UID: "c32d34a1-8dd8-435d-9491-748392c25b97"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.277324 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c32d34a1-8dd8-435d-9491-748392c25b97-kube-api-access-bgmqw" (OuterVolumeSpecName: "kube-api-access-bgmqw") pod "c32d34a1-8dd8-435d-9491-748392c25b97" (UID: "c32d34a1-8dd8-435d-9491-748392c25b97"). InnerVolumeSpecName "kube-api-access-bgmqw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.343829 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c32d34a1-8dd8-435d-9491-748392c25b97-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c32d34a1-8dd8-435d-9491-748392c25b97" (UID: "c32d34a1-8dd8-435d-9491-748392c25b97"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.370559 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c32d34a1-8dd8-435d-9491-748392c25b97-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.370602 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgmqw\" (UniqueName: \"kubernetes.io/projected/c32d34a1-8dd8-435d-9491-748392c25b97-kube-api-access-bgmqw\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.370619 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c32d34a1-8dd8-435d-9491-748392c25b97-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.452185 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zcf44" Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.457543 4721 generic.go:334] "Generic (PLEG): container finished" podID="b97707af-edd5-4907-9459-615b32a005e6" containerID="33ab21456de0805136a88c1d6ecd74979aabd0d3bfbd8cc958f6e35c18ed050d" exitCode=0 Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.457610 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pm5t7" event={"ID":"b97707af-edd5-4907-9459-615b32a005e6","Type":"ContainerDied","Data":"33ab21456de0805136a88c1d6ecd74979aabd0d3bfbd8cc958f6e35c18ed050d"} Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.459994 4721 generic.go:334] "Generic (PLEG): container finished" podID="2c9074bc-889d-4ce7-a250-6fc5984703e0" containerID="598f0ede0e5be4e9da5dde9217b5303de54f203ff19e55b638f8757dabfd9366" exitCode=0 Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.460042 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zcf44" event={"ID":"2c9074bc-889d-4ce7-a250-6fc5984703e0","Type":"ContainerDied","Data":"598f0ede0e5be4e9da5dde9217b5303de54f203ff19e55b638f8757dabfd9366"} Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.460173 4721 scope.go:117] "RemoveContainer" containerID="598f0ede0e5be4e9da5dde9217b5303de54f203ff19e55b638f8757dabfd9366" Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.460333 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zcf44" Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.470350 4721 generic.go:334] "Generic (PLEG): container finished" podID="b11b9dcc-682e-48c6-9948-78aafcaf9e36" containerID="7fb035f08da78b4db85bc1ea0be2acf6c348cb105b55a68b2cac0ebf26b8bc1a" exitCode=0 Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.476377 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75gx6" event={"ID":"b11b9dcc-682e-48c6-9948-78aafcaf9e36","Type":"ContainerDied","Data":"7fb035f08da78b4db85bc1ea0be2acf6c348cb105b55a68b2cac0ebf26b8bc1a"} Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.490766 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-75gx6" Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.491123 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s2tcj" Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.491018 4721 generic.go:334] "Generic (PLEG): container finished" podID="c32d34a1-8dd8-435d-9491-748392c25b97" containerID="53d55fa7bab62269a87c795054de164d19a14b295c5f255ec1a3087c58689970" exitCode=0 Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.491042 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2tcj" event={"ID":"c32d34a1-8dd8-435d-9491-748392c25b97","Type":"ContainerDied","Data":"53d55fa7bab62269a87c795054de164d19a14b295c5f255ec1a3087c58689970"} Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.492415 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2tcj" event={"ID":"c32d34a1-8dd8-435d-9491-748392c25b97","Type":"ContainerDied","Data":"607b84235315a7323834248f190f48d1f32164d2db07ef1dcc79f0ce6457a6d0"} Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.496309 4721 generic.go:334] "Generic (PLEG): container finished" podID="7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b" containerID="410edca835d21d18be323b803ab840df97552992c0ec9f405104a39dad3828e0" exitCode=0 Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.496356 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftf6s" event={"ID":"7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b","Type":"ContainerDied","Data":"410edca835d21d18be323b803ab840df97552992c0ec9f405104a39dad3828e0"} Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.496459 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftf6s" Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.499192 4721 scope.go:117] "RemoveContainer" containerID="6e95f003df211d09b9562e86431541c3b7c3e84c41d01ea470d07b5cb914180b" Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.502487 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pm5t7" Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.521299 4721 scope.go:117] "RemoveContainer" containerID="53d55fa7bab62269a87c795054de164d19a14b295c5f255ec1a3087c58689970" Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.541922 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s2tcj"] Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.542308 4721 scope.go:117] "RemoveContainer" containerID="cb844a088120bc9509ad8ea45c170bf127b70cd4144302a7ad6ac587245cde76" Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.558631 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s2tcj"] Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.572923 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2c9074bc-889d-4ce7-a250-6fc5984703e0-marketplace-trusted-ca\") pod \"2c9074bc-889d-4ce7-a250-6fc5984703e0\" (UID: \"2c9074bc-889d-4ce7-a250-6fc5984703e0\") " Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.572968 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2c9074bc-889d-4ce7-a250-6fc5984703e0-marketplace-operator-metrics\") pod \"2c9074bc-889d-4ce7-a250-6fc5984703e0\" (UID: \"2c9074bc-889d-4ce7-a250-6fc5984703e0\") " Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.573032 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn92f\" (UniqueName: \"kubernetes.io/projected/2c9074bc-889d-4ce7-a250-6fc5984703e0-kube-api-access-xn92f\") pod \"2c9074bc-889d-4ce7-a250-6fc5984703e0\" (UID: \"2c9074bc-889d-4ce7-a250-6fc5984703e0\") " Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.574290 4721 scope.go:117] "RemoveContainer" containerID="70aefbce2f033e0a1ab3507eb5f25cc0466994c0e699fb1e5db5f4596c72d39d" Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.577251 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c9074bc-889d-4ce7-a250-6fc5984703e0-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "2c9074bc-889d-4ce7-a250-6fc5984703e0" (UID: "2c9074bc-889d-4ce7-a250-6fc5984703e0"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.577696 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c9074bc-889d-4ce7-a250-6fc5984703e0-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "2c9074bc-889d-4ce7-a250-6fc5984703e0" (UID: "2c9074bc-889d-4ce7-a250-6fc5984703e0"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.579785 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c9074bc-889d-4ce7-a250-6fc5984703e0-kube-api-access-xn92f" (OuterVolumeSpecName: "kube-api-access-xn92f") pod "2c9074bc-889d-4ce7-a250-6fc5984703e0" (UID: "2c9074bc-889d-4ce7-a250-6fc5984703e0"). InnerVolumeSpecName "kube-api-access-xn92f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.587559 4721 scope.go:117] "RemoveContainer" containerID="53d55fa7bab62269a87c795054de164d19a14b295c5f255ec1a3087c58689970" Feb 02 13:07:11 crc kubenswrapper[4721]: E0202 13:07:11.587858 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53d55fa7bab62269a87c795054de164d19a14b295c5f255ec1a3087c58689970\": container with ID starting with 53d55fa7bab62269a87c795054de164d19a14b295c5f255ec1a3087c58689970 not found: ID does not exist" containerID="53d55fa7bab62269a87c795054de164d19a14b295c5f255ec1a3087c58689970" Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.587990 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53d55fa7bab62269a87c795054de164d19a14b295c5f255ec1a3087c58689970"} err="failed to get container status \"53d55fa7bab62269a87c795054de164d19a14b295c5f255ec1a3087c58689970\": rpc error: code = NotFound desc = could not find container \"53d55fa7bab62269a87c795054de164d19a14b295c5f255ec1a3087c58689970\": container with ID starting with 53d55fa7bab62269a87c795054de164d19a14b295c5f255ec1a3087c58689970 not found: ID does not exist" Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.588014 4721 scope.go:117] "RemoveContainer" containerID="cb844a088120bc9509ad8ea45c170bf127b70cd4144302a7ad6ac587245cde76" Feb 02 13:07:11 crc kubenswrapper[4721]: E0202 13:07:11.588360 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb844a088120bc9509ad8ea45c170bf127b70cd4144302a7ad6ac587245cde76\": container with ID starting with cb844a088120bc9509ad8ea45c170bf127b70cd4144302a7ad6ac587245cde76 not found: ID does not exist" containerID="cb844a088120bc9509ad8ea45c170bf127b70cd4144302a7ad6ac587245cde76" Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.588386 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb844a088120bc9509ad8ea45c170bf127b70cd4144302a7ad6ac587245cde76"} err="failed to get container status \"cb844a088120bc9509ad8ea45c170bf127b70cd4144302a7ad6ac587245cde76\": rpc error: code = NotFound desc = could not find container \"cb844a088120bc9509ad8ea45c170bf127b70cd4144302a7ad6ac587245cde76\": container with ID starting with cb844a088120bc9509ad8ea45c170bf127b70cd4144302a7ad6ac587245cde76 not found: ID does not exist" Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.588421 4721 scope.go:117] "RemoveContainer" containerID="70aefbce2f033e0a1ab3507eb5f25cc0466994c0e699fb1e5db5f4596c72d39d" Feb 02 13:07:11 crc kubenswrapper[4721]: E0202 13:07:11.588679 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70aefbce2f033e0a1ab3507eb5f25cc0466994c0e699fb1e5db5f4596c72d39d\": container with ID starting with 70aefbce2f033e0a1ab3507eb5f25cc0466994c0e699fb1e5db5f4596c72d39d not found: ID does not exist" containerID="70aefbce2f033e0a1ab3507eb5f25cc0466994c0e699fb1e5db5f4596c72d39d" Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.588732 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70aefbce2f033e0a1ab3507eb5f25cc0466994c0e699fb1e5db5f4596c72d39d"} err="failed to get container status \"70aefbce2f033e0a1ab3507eb5f25cc0466994c0e699fb1e5db5f4596c72d39d\": rpc error: code = NotFound desc = could not 
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.674355 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b97707af-edd5-4907-9459-615b32a005e6-catalog-content\") pod \"b97707af-edd5-4907-9459-615b32a005e6\" (UID: \"b97707af-edd5-4907-9459-615b32a005e6\") "
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.674399 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srrj7\" (UniqueName: \"kubernetes.io/projected/7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b-kube-api-access-srrj7\") pod \"7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b\" (UID: \"7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b\") "
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.674457 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsbmd\" (UniqueName: \"kubernetes.io/projected/b11b9dcc-682e-48c6-9948-78aafcaf9e36-kube-api-access-gsbmd\") pod \"b11b9dcc-682e-48c6-9948-78aafcaf9e36\" (UID: \"b11b9dcc-682e-48c6-9948-78aafcaf9e36\") "
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.674484 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b-utilities\") pod \"7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b\" (UID: \"7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b\") "
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.674527 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11b9dcc-682e-48c6-9948-78aafcaf9e36-utilities\") pod \"b11b9dcc-682e-48c6-9948-78aafcaf9e36\" (UID: \"b11b9dcc-682e-48c6-9948-78aafcaf9e36\") "
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.674551 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b97707af-edd5-4907-9459-615b32a005e6-utilities\") pod \"b97707af-edd5-4907-9459-615b32a005e6\" (UID: \"b97707af-edd5-4907-9459-615b32a005e6\") "
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.674573 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b-catalog-content\") pod \"7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b\" (UID: \"7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b\") "
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.674601 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11b9dcc-682e-48c6-9948-78aafcaf9e36-catalog-content\") pod \"b11b9dcc-682e-48c6-9948-78aafcaf9e36\" (UID: \"b11b9dcc-682e-48c6-9948-78aafcaf9e36\") "
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.674621 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk875\" (UniqueName: \"kubernetes.io/projected/b97707af-edd5-4907-9459-615b32a005e6-kube-api-access-mk875\") pod \"b97707af-edd5-4907-9459-615b32a005e6\" (UID: \"b97707af-edd5-4907-9459-615b32a005e6\") "
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.674807 4721 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2c9074bc-889d-4ce7-a250-6fc5984703e0-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.674823 4721 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2c9074bc-889d-4ce7-a250-6fc5984703e0-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.674835 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn92f\" (UniqueName: \"kubernetes.io/projected/2c9074bc-889d-4ce7-a250-6fc5984703e0-kube-api-access-xn92f\") on node \"crc\" DevicePath \"\""
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.675724 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b-utilities" (OuterVolumeSpecName: "utilities") pod "7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b" (UID: "7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.676031 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11b9dcc-682e-48c6-9948-78aafcaf9e36-utilities" (OuterVolumeSpecName: "utilities") pod "b11b9dcc-682e-48c6-9948-78aafcaf9e36" (UID: "b11b9dcc-682e-48c6-9948-78aafcaf9e36"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.676589 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b97707af-edd5-4907-9459-615b32a005e6-utilities" (OuterVolumeSpecName: "utilities") pod "b97707af-edd5-4907-9459-615b32a005e6" (UID: "b97707af-edd5-4907-9459-615b32a005e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.677168 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11b9dcc-682e-48c6-9948-78aafcaf9e36-kube-api-access-gsbmd" (OuterVolumeSpecName: "kube-api-access-gsbmd") pod "b11b9dcc-682e-48c6-9948-78aafcaf9e36" (UID: "b11b9dcc-682e-48c6-9948-78aafcaf9e36"). InnerVolumeSpecName "kube-api-access-gsbmd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.680420 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b97707af-edd5-4907-9459-615b32a005e6-kube-api-access-mk875" (OuterVolumeSpecName: "kube-api-access-mk875") pod "b97707af-edd5-4907-9459-615b32a005e6" (UID: "b97707af-edd5-4907-9459-615b32a005e6"). InnerVolumeSpecName "kube-api-access-mk875". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.686797 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b-kube-api-access-srrj7" (OuterVolumeSpecName: "kube-api-access-srrj7") pod "7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b" (UID: "7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b"). InnerVolumeSpecName "kube-api-access-srrj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.703712 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11b9dcc-682e-48c6-9948-78aafcaf9e36-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11b9dcc-682e-48c6-9948-78aafcaf9e36" (UID: "b11b9dcc-682e-48c6-9948-78aafcaf9e36"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.733908 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b" (UID: "7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.779272 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11b9dcc-682e-48c6-9948-78aafcaf9e36-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.779306 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b97707af-edd5-4907-9459-615b32a005e6-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.779320 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.779333 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11b9dcc-682e-48c6-9948-78aafcaf9e36-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.779348 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mk875\" (UniqueName: \"kubernetes.io/projected/b97707af-edd5-4907-9459-615b32a005e6-kube-api-access-mk875\") on node \"crc\" DevicePath \"\""
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.779362 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srrj7\" (UniqueName: \"kubernetes.io/projected/7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b-kube-api-access-srrj7\") on node \"crc\" DevicePath \"\""
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.779375 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsbmd\" (UniqueName: \"kubernetes.io/projected/b11b9dcc-682e-48c6-9948-78aafcaf9e36-kube-api-access-gsbmd\") on node \"crc\" DevicePath \"\""
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.779387 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.800658 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wdnhz"]
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.805735 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zcf44"]
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.811981 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b97707af-edd5-4907-9459-615b32a005e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b97707af-edd5-4907-9459-615b32a005e6" (UID: "b97707af-edd5-4907-9459-615b32a005e6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.823262 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zcf44"]
Feb 02 13:07:11 crc kubenswrapper[4721]: I0202 13:07:11.880223 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b97707af-edd5-4907-9459-615b32a005e6-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 13:07:12 crc kubenswrapper[4721]: I0202 13:07:12.424753 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c9074bc-889d-4ce7-a250-6fc5984703e0" path="/var/lib/kubelet/pods/2c9074bc-889d-4ce7-a250-6fc5984703e0/volumes"
Feb 02 13:07:12 crc kubenswrapper[4721]: I0202 13:07:12.425749 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c32d34a1-8dd8-435d-9491-748392c25b97" path="/var/lib/kubelet/pods/c32d34a1-8dd8-435d-9491-748392c25b97/volumes"
Feb 02 13:07:12 crc kubenswrapper[4721]: I0202 13:07:12.503951 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftf6s" event={"ID":"7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b","Type":"ContainerDied","Data":"ff138d00e2dec0f6fe53dd62f78ed24adffd461fe550704795a81bdea55a7066"}
Feb 02 13:07:12 crc kubenswrapper[4721]: I0202 13:07:12.503979 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftf6s"
Feb 02 13:07:12 crc kubenswrapper[4721]: I0202 13:07:12.504056 4721 scope.go:117] "RemoveContainer" containerID="410edca835d21d18be323b803ab840df97552992c0ec9f405104a39dad3828e0"
Feb 02 13:07:12 crc kubenswrapper[4721]: I0202 13:07:12.505594 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pm5t7" event={"ID":"b97707af-edd5-4907-9459-615b32a005e6","Type":"ContainerDied","Data":"731af821f39e85a65313814ab808bf2c6795132d15116e8c0a34a91225b2d5b6"}
Feb 02 13:07:12 crc kubenswrapper[4721]: I0202 13:07:12.505707 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pm5t7"
Feb 02 13:07:12 crc kubenswrapper[4721]: I0202 13:07:12.513446 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75gx6" event={"ID":"b11b9dcc-682e-48c6-9948-78aafcaf9e36","Type":"ContainerDied","Data":"5628d04181a13e0213caa7a951b015bba8003374b2bb6f608199a4eba95c3b17"}
Feb 02 13:07:12 crc kubenswrapper[4721]: I0202 13:07:12.513503 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-75gx6"
Feb 02 13:07:12 crc kubenswrapper[4721]: I0202 13:07:12.517954 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wdnhz" event={"ID":"884fbbc4-b86d-4f88-9fc6-2aa2015b81d3","Type":"ContainerStarted","Data":"d10ae26e9bb0464a19801ffbe62c8af355edb1f673e6e65d0e5f00acc648b10b"}
Feb 02 13:07:12 crc kubenswrapper[4721]: I0202 13:07:12.518038 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wdnhz" event={"ID":"884fbbc4-b86d-4f88-9fc6-2aa2015b81d3","Type":"ContainerStarted","Data":"e5a898e7a9533cf3d6dd33e89749a46f724661ae5084c8e13dd0fdb9f012eca3"}
Feb 02 13:07:12 crc kubenswrapper[4721]: I0202 13:07:12.518343 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wdnhz"
Feb 02 13:07:12 crc kubenswrapper[4721]: I0202 13:07:12.523878 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wdnhz"
Feb 02 13:07:12 crc kubenswrapper[4721]: I0202 13:07:12.532095 4721 scope.go:117] "RemoveContainer" containerID="9828d087ebd0a3d61cf032280a26f38ac894879bfffd8ab8dc7b9c9e262b96fd"
Feb 02 13:07:12 crc kubenswrapper[4721]: I0202 13:07:12.536360 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pm5t7"]
Feb 02 13:07:12 crc kubenswrapper[4721]: I0202 13:07:12.538205 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pm5t7"]
Feb 02 13:07:12 crc kubenswrapper[4721]: I0202 13:07:12.554589 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ftf6s"]
Feb 02 13:07:12 crc kubenswrapper[4721]: I0202 13:07:12.563133 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ftf6s"]
Feb 02 13:07:12 crc kubenswrapper[4721]: I0202 13:07:12.564083 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-75gx6"]
Feb 02 13:07:12 crc kubenswrapper[4721]: I0202 13:07:12.565827 4721 scope.go:117] "RemoveContainer" containerID="c4262d6d2388653a81a9cb645a74803128568514778cf935a45ab36a4268cbc6"
Feb 02 13:07:12 crc kubenswrapper[4721]: I0202 13:07:12.566861 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-75gx6"]
Feb 02 13:07:12 crc kubenswrapper[4721]: I0202 13:07:12.581857 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-wdnhz" podStartSLOduration=2.581840407 podStartE2EDuration="2.581840407s" podCreationTimestamp="2026-02-02 13:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:07:12.579368349 +0000 UTC m=+372.881882758" watchObservedRunningTime="2026-02-02 13:07:12.581840407 +0000 UTC m=+372.884354796"
Feb 02 13:07:12 crc kubenswrapper[4721]: I0202 13:07:12.589343 4721 scope.go:117] "RemoveContainer" containerID="33ab21456de0805136a88c1d6ecd74979aabd0d3bfbd8cc958f6e35c18ed050d"
Feb 02 13:07:12 crc kubenswrapper[4721]: I0202 13:07:12.619000 4721 scope.go:117] "RemoveContainer" containerID="0d83a50e35994a069f84ac2fbbbdd424065961585a6b4d7fa1391779a81dfd2b"
Feb 02 13:07:12 crc kubenswrapper[4721]: I0202 13:07:12.642046 4721 scope.go:117] "RemoveContainer" containerID="41599e7535f02f311fe8e5965707307ae8f5502aec8ceadc6ba6ac29d4504579"
Feb 02 13:07:12 crc kubenswrapper[4721]: I0202 13:07:12.660879 4721 scope.go:117] "RemoveContainer" containerID="7fb035f08da78b4db85bc1ea0be2acf6c348cb105b55a68b2cac0ebf26b8bc1a"
Feb 02 13:07:12 crc kubenswrapper[4721]: I0202 13:07:12.686342 4721 scope.go:117] "RemoveContainer" containerID="e1bc2f4e0704a9ccfc7c0081b3e11299c3f9620d4305892b40a44031af50c66b"
Feb 02 13:07:12 crc kubenswrapper[4721]: I0202 13:07:12.705343 4721 scope.go:117] "RemoveContainer" containerID="d98e8c3180feeb272dbc337ede325b3ee8bdf7c11b2445546d5a7351f1d071c3"
Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.019256 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c64xc"]
Feb 02 13:07:13 crc kubenswrapper[4721]: E0202 13:07:13.019528 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c32d34a1-8dd8-435d-9491-748392c25b97" containerName="extract-utilities"
Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.019540 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="c32d34a1-8dd8-435d-9491-748392c25b97" containerName="extract-utilities"
Feb 02 13:07:13 crc kubenswrapper[4721]: E0202 13:07:13.019550 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b" containerName="registry-server"
Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.019556 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b" containerName="registry-server"
Feb 02 13:07:13 crc kubenswrapper[4721]: E0202 13:07:13.019568 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b97707af-edd5-4907-9459-615b32a005e6" containerName="extract-utilities"
Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.019574 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b97707af-edd5-4907-9459-615b32a005e6" containerName="extract-utilities"
Feb 02 13:07:13 crc kubenswrapper[4721]: E0202 13:07:13.019584 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c32d34a1-8dd8-435d-9491-748392c25b97" containerName="registry-server"
Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.019589 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="c32d34a1-8dd8-435d-9491-748392c25b97" containerName="registry-server"
Feb 02 13:07:13 crc kubenswrapper[4721]: E0202 13:07:13.019596 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b11b9dcc-682e-48c6-9948-78aafcaf9e36" containerName="registry-server"
Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.019601 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b11b9dcc-682e-48c6-9948-78aafcaf9e36" containerName="registry-server"
Feb 02 13:07:13 crc kubenswrapper[4721]: E0202 13:07:13.019614 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b11b9dcc-682e-48c6-9948-78aafcaf9e36" containerName="extract-utilities"
Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.019620 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b11b9dcc-682e-48c6-9948-78aafcaf9e36" containerName="extract-utilities"
Feb 02 13:07:13 crc kubenswrapper[4721]: E0202 13:07:13.019630 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b97707af-edd5-4907-9459-615b32a005e6" containerName="extract-content"
Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.019636 4721 state_mem.go:107] "Deleted CPUSet assignment"
podUID="b97707af-edd5-4907-9459-615b32a005e6" containerName="extract-content" Feb 02 13:07:13 crc kubenswrapper[4721]: E0202 13:07:13.019643 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c9074bc-889d-4ce7-a250-6fc5984703e0" containerName="marketplace-operator" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.019649 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c9074bc-889d-4ce7-a250-6fc5984703e0" containerName="marketplace-operator" Feb 02 13:07:13 crc kubenswrapper[4721]: E0202 13:07:13.019660 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b" containerName="extract-utilities" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.019665 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b" containerName="extract-utilities" Feb 02 13:07:13 crc kubenswrapper[4721]: E0202 13:07:13.019673 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c9074bc-889d-4ce7-a250-6fc5984703e0" containerName="marketplace-operator" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.019680 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c9074bc-889d-4ce7-a250-6fc5984703e0" containerName="marketplace-operator" Feb 02 13:07:13 crc kubenswrapper[4721]: E0202 13:07:13.019691 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b11b9dcc-682e-48c6-9948-78aafcaf9e36" containerName="extract-content" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.019697 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b11b9dcc-682e-48c6-9948-78aafcaf9e36" containerName="extract-content" Feb 02 13:07:13 crc kubenswrapper[4721]: E0202 13:07:13.019704 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b" containerName="extract-content" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.019709 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b" containerName="extract-content" Feb 02 13:07:13 crc kubenswrapper[4721]: E0202 13:07:13.019718 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c32d34a1-8dd8-435d-9491-748392c25b97" containerName="extract-content" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.019723 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="c32d34a1-8dd8-435d-9491-748392c25b97" containerName="extract-content" Feb 02 13:07:13 crc kubenswrapper[4721]: E0202 13:07:13.019732 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b97707af-edd5-4907-9459-615b32a005e6" containerName="registry-server" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.019737 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b97707af-edd5-4907-9459-615b32a005e6" containerName="registry-server" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.019821 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="b11b9dcc-682e-48c6-9948-78aafcaf9e36" containerName="registry-server" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.019829 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b" containerName="registry-server" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.019841 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="c32d34a1-8dd8-435d-9491-748392c25b97" containerName="registry-server" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.019847 
4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c9074bc-889d-4ce7-a250-6fc5984703e0" containerName="marketplace-operator" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.019857 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c9074bc-889d-4ce7-a250-6fc5984703e0" containerName="marketplace-operator" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.019863 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="b97707af-edd5-4907-9459-615b32a005e6" containerName="registry-server" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.020590 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c64xc" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.022333 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.023277 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c64xc"] Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.209169 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8drnj\" (UniqueName: \"kubernetes.io/projected/8851d4c5-8c20-440c-bb07-d7542ea1620d-kube-api-access-8drnj\") pod \"redhat-marketplace-c64xc\" (UID: \"8851d4c5-8c20-440c-bb07-d7542ea1620d\") " pod="openshift-marketplace/redhat-marketplace-c64xc" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.209278 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8851d4c5-8c20-440c-bb07-d7542ea1620d-utilities\") pod \"redhat-marketplace-c64xc\" (UID: \"8851d4c5-8c20-440c-bb07-d7542ea1620d\") " pod="openshift-marketplace/redhat-marketplace-c64xc" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.209313 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8851d4c5-8c20-440c-bb07-d7542ea1620d-catalog-content\") pod \"redhat-marketplace-c64xc\" (UID: \"8851d4c5-8c20-440c-bb07-d7542ea1620d\") " pod="openshift-marketplace/redhat-marketplace-c64xc" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.213173 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gc4db"] Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.215054 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gc4db" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.219525 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.221538 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gc4db"] Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.310359 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8851d4c5-8c20-440c-bb07-d7542ea1620d-utilities\") pod \"redhat-marketplace-c64xc\" (UID: \"8851d4c5-8c20-440c-bb07-d7542ea1620d\") " pod="openshift-marketplace/redhat-marketplace-c64xc" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.310414 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8851d4c5-8c20-440c-bb07-d7542ea1620d-catalog-content\") pod \"redhat-marketplace-c64xc\" (UID: \"8851d4c5-8c20-440c-bb07-d7542ea1620d\") " pod="openshift-marketplace/redhat-marketplace-c64xc" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.310734 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8drnj\" (UniqueName: \"kubernetes.io/projected/8851d4c5-8c20-440c-bb07-d7542ea1620d-kube-api-access-8drnj\") pod \"redhat-marketplace-c64xc\" (UID: \"8851d4c5-8c20-440c-bb07-d7542ea1620d\") " pod="openshift-marketplace/redhat-marketplace-c64xc" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.310853 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8851d4c5-8c20-440c-bb07-d7542ea1620d-catalog-content\") pod \"redhat-marketplace-c64xc\" (UID: \"8851d4c5-8c20-440c-bb07-d7542ea1620d\") " pod="openshift-marketplace/redhat-marketplace-c64xc" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.311401 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8851d4c5-8c20-440c-bb07-d7542ea1620d-utilities\") pod \"redhat-marketplace-c64xc\" (UID: \"8851d4c5-8c20-440c-bb07-d7542ea1620d\") " pod="openshift-marketplace/redhat-marketplace-c64xc" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.338835 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8drnj\" (UniqueName: \"kubernetes.io/projected/8851d4c5-8c20-440c-bb07-d7542ea1620d-kube-api-access-8drnj\") pod \"redhat-marketplace-c64xc\" (UID: \"8851d4c5-8c20-440c-bb07-d7542ea1620d\") " pod="openshift-marketplace/redhat-marketplace-c64xc" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.344301 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c64xc" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.412274 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1-utilities\") pod \"redhat-operators-gc4db\" (UID: \"3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1\") " pod="openshift-marketplace/redhat-operators-gc4db" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.412563 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1-catalog-content\") pod \"redhat-operators-gc4db\" (UID: \"3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1\") " pod="openshift-marketplace/redhat-operators-gc4db" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.412679 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9grc\" (UniqueName: \"kubernetes.io/projected/3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1-kube-api-access-f9grc\") pod \"redhat-operators-gc4db\" (UID: \"3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1\") " pod="openshift-marketplace/redhat-operators-gc4db" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.513461 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1-utilities\") pod \"redhat-operators-gc4db\" (UID: \"3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1\") " pod="openshift-marketplace/redhat-operators-gc4db" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.513528 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1-catalog-content\") pod \"redhat-operators-gc4db\" (UID: \"3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1\") " pod="openshift-marketplace/redhat-operators-gc4db" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.513591 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9grc\" (UniqueName: \"kubernetes.io/projected/3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1-kube-api-access-f9grc\") pod \"redhat-operators-gc4db\" (UID: \"3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1\") " pod="openshift-marketplace/redhat-operators-gc4db" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.513987 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1-utilities\") pod \"redhat-operators-gc4db\" (UID: \"3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1\") " pod="openshift-marketplace/redhat-operators-gc4db" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.514243 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1-catalog-content\") pod \"redhat-operators-gc4db\" (UID: \"3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1\") " pod="openshift-marketplace/redhat-operators-gc4db" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.537205 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9grc\" (UniqueName: \"kubernetes.io/projected/3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1-kube-api-access-f9grc\") pod \"redhat-operators-gc4db\" (UID: 
\"3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1\") " pod="openshift-marketplace/redhat-operators-gc4db" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.547489 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gc4db" Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.754497 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c64xc"] Feb 02 13:07:13 crc kubenswrapper[4721]: W0202 13:07:13.766497 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8851d4c5_8c20_440c_bb07_d7542ea1620d.slice/crio-96be71087000d8a85cbf561c42377e0d2351a76a62b024a0cafc13962b0aa12f WatchSource:0}: Error finding container 96be71087000d8a85cbf561c42377e0d2351a76a62b024a0cafc13962b0aa12f: Status 404 returned error can't find the container with id 96be71087000d8a85cbf561c42377e0d2351a76a62b024a0cafc13962b0aa12f Feb 02 13:07:13 crc kubenswrapper[4721]: I0202 13:07:13.954933 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gc4db"] Feb 02 13:07:14 crc kubenswrapper[4721]: W0202 13:07:14.023857 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d4d1a7c_52fd_456d_ab0e_78a9c4529fd1.slice/crio-c443084e3e1b254cd6cca1fcfdbe64c90be36a3dfa2b67b63b03d3820015e610 WatchSource:0}: Error finding container c443084e3e1b254cd6cca1fcfdbe64c90be36a3dfa2b67b63b03d3820015e610: Status 404 returned error can't find the container with id c443084e3e1b254cd6cca1fcfdbe64c90be36a3dfa2b67b63b03d3820015e610 Feb 02 13:07:14 crc kubenswrapper[4721]: I0202 13:07:14.417639 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b" path="/var/lib/kubelet/pods/7d9a2b55-8601-4fdc-9e0a-1cfe62d3754b/volumes" Feb 02 13:07:14 crc kubenswrapper[4721]: I0202 13:07:14.419050 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11b9dcc-682e-48c6-9948-78aafcaf9e36" path="/var/lib/kubelet/pods/b11b9dcc-682e-48c6-9948-78aafcaf9e36/volumes" Feb 02 13:07:14 crc kubenswrapper[4721]: I0202 13:07:14.420141 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b97707af-edd5-4907-9459-615b32a005e6" path="/var/lib/kubelet/pods/b97707af-edd5-4907-9459-615b32a005e6/volumes" Feb 02 13:07:14 crc kubenswrapper[4721]: I0202 13:07:14.534856 4721 generic.go:334] "Generic (PLEG): container finished" podID="8851d4c5-8c20-440c-bb07-d7542ea1620d" containerID="589fb384ac7240dcc9c807d4c1ce3907384769a460707e2d0ea0f1f488fe20b1" exitCode=0 Feb 02 13:07:14 crc kubenswrapper[4721]: I0202 13:07:14.534903 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c64xc" event={"ID":"8851d4c5-8c20-440c-bb07-d7542ea1620d","Type":"ContainerDied","Data":"589fb384ac7240dcc9c807d4c1ce3907384769a460707e2d0ea0f1f488fe20b1"} Feb 02 13:07:14 crc kubenswrapper[4721]: I0202 13:07:14.534925 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c64xc" event={"ID":"8851d4c5-8c20-440c-bb07-d7542ea1620d","Type":"ContainerStarted","Data":"96be71087000d8a85cbf561c42377e0d2351a76a62b024a0cafc13962b0aa12f"} Feb 02 13:07:14 crc kubenswrapper[4721]: I0202 13:07:14.538532 4721 generic.go:334] "Generic (PLEG): container finished" podID="3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1" 
containerID="b146ac5d525d99b3d9e62ac67dc0abf39c407bd8cadfc21eb11915cc07946f3f" exitCode=0 Feb 02 13:07:14 crc kubenswrapper[4721]: I0202 13:07:14.538962 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gc4db" event={"ID":"3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1","Type":"ContainerDied","Data":"b146ac5d525d99b3d9e62ac67dc0abf39c407bd8cadfc21eb11915cc07946f3f"} Feb 02 13:07:14 crc kubenswrapper[4721]: I0202 13:07:14.539034 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gc4db" event={"ID":"3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1","Type":"ContainerStarted","Data":"c443084e3e1b254cd6cca1fcfdbe64c90be36a3dfa2b67b63b03d3820015e610"} Feb 02 13:07:14 crc kubenswrapper[4721]: I0202 13:07:14.764083 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:07:14 crc kubenswrapper[4721]: I0202 13:07:14.764153 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:07:15 crc kubenswrapper[4721]: I0202 13:07:15.406920 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w5wlg"] Feb 02 13:07:15 crc kubenswrapper[4721]: I0202 13:07:15.408398 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w5wlg" Feb 02 13:07:15 crc kubenswrapper[4721]: I0202 13:07:15.417473 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 13:07:15 crc kubenswrapper[4721]: I0202 13:07:15.421403 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w5wlg"] Feb 02 13:07:15 crc kubenswrapper[4721]: I0202 13:07:15.444463 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2db39b59-16bf-4029-b8be-4be395b09cdf-utilities\") pod \"certified-operators-w5wlg\" (UID: \"2db39b59-16bf-4029-b8be-4be395b09cdf\") " pod="openshift-marketplace/certified-operators-w5wlg" Feb 02 13:07:15 crc kubenswrapper[4721]: I0202 13:07:15.444527 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9j2g\" (UniqueName: \"kubernetes.io/projected/2db39b59-16bf-4029-b8be-4be395b09cdf-kube-api-access-c9j2g\") pod \"certified-operators-w5wlg\" (UID: \"2db39b59-16bf-4029-b8be-4be395b09cdf\") " pod="openshift-marketplace/certified-operators-w5wlg" Feb 02 13:07:15 crc kubenswrapper[4721]: I0202 13:07:15.444568 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2db39b59-16bf-4029-b8be-4be395b09cdf-catalog-content\") pod \"certified-operators-w5wlg\" (UID: \"2db39b59-16bf-4029-b8be-4be395b09cdf\") " pod="openshift-marketplace/certified-operators-w5wlg" Feb 02 13:07:15 crc kubenswrapper[4721]: I0202 13:07:15.546737 4721 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2db39b59-16bf-4029-b8be-4be395b09cdf-catalog-content\") pod \"certified-operators-w5wlg\" (UID: \"2db39b59-16bf-4029-b8be-4be395b09cdf\") " pod="openshift-marketplace/certified-operators-w5wlg" Feb 02 13:07:15 crc kubenswrapper[4721]: I0202 13:07:15.547162 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2db39b59-16bf-4029-b8be-4be395b09cdf-utilities\") pod \"certified-operators-w5wlg\" (UID: \"2db39b59-16bf-4029-b8be-4be395b09cdf\") " pod="openshift-marketplace/certified-operators-w5wlg" Feb 02 13:07:15 crc kubenswrapper[4721]: I0202 13:07:15.547206 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9j2g\" (UniqueName: \"kubernetes.io/projected/2db39b59-16bf-4029-b8be-4be395b09cdf-kube-api-access-c9j2g\") pod \"certified-operators-w5wlg\" (UID: \"2db39b59-16bf-4029-b8be-4be395b09cdf\") " pod="openshift-marketplace/certified-operators-w5wlg" Feb 02 13:07:15 crc kubenswrapper[4721]: I0202 13:07:15.547307 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2db39b59-16bf-4029-b8be-4be395b09cdf-catalog-content\") pod \"certified-operators-w5wlg\" (UID: \"2db39b59-16bf-4029-b8be-4be395b09cdf\") " pod="openshift-marketplace/certified-operators-w5wlg" Feb 02 13:07:15 crc kubenswrapper[4721]: I0202 13:07:15.547522 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2db39b59-16bf-4029-b8be-4be395b09cdf-utilities\") pod \"certified-operators-w5wlg\" (UID: \"2db39b59-16bf-4029-b8be-4be395b09cdf\") " pod="openshift-marketplace/certified-operators-w5wlg" Feb 02 13:07:15 crc kubenswrapper[4721]: I0202 13:07:15.577402 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9j2g\" (UniqueName: \"kubernetes.io/projected/2db39b59-16bf-4029-b8be-4be395b09cdf-kube-api-access-c9j2g\") pod \"certified-operators-w5wlg\" (UID: \"2db39b59-16bf-4029-b8be-4be395b09cdf\") " pod="openshift-marketplace/certified-operators-w5wlg" Feb 02 13:07:15 crc kubenswrapper[4721]: I0202 13:07:15.619143 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kv46m"] Feb 02 13:07:15 crc kubenswrapper[4721]: I0202 13:07:15.621090 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kv46m" Feb 02 13:07:15 crc kubenswrapper[4721]: I0202 13:07:15.626085 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kv46m"] Feb 02 13:07:15 crc kubenswrapper[4721]: I0202 13:07:15.626666 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 13:07:15 crc kubenswrapper[4721]: I0202 13:07:15.648218 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be9ad0b8-eef7-451f-82b9-1b5cc54c63c2-catalog-content\") pod \"community-operators-kv46m\" (UID: \"be9ad0b8-eef7-451f-82b9-1b5cc54c63c2\") " pod="openshift-marketplace/community-operators-kv46m" Feb 02 13:07:15 crc kubenswrapper[4721]: I0202 13:07:15.648282 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5svst\" (UniqueName: \"kubernetes.io/projected/be9ad0b8-eef7-451f-82b9-1b5cc54c63c2-kube-api-access-5svst\") pod \"community-operators-kv46m\" (UID: \"be9ad0b8-eef7-451f-82b9-1b5cc54c63c2\") " pod="openshift-marketplace/community-operators-kv46m" Feb 02 13:07:15 crc kubenswrapper[4721]: I0202 13:07:15.648316 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be9ad0b8-eef7-451f-82b9-1b5cc54c63c2-utilities\") pod \"community-operators-kv46m\" (UID: \"be9ad0b8-eef7-451f-82b9-1b5cc54c63c2\") " pod="openshift-marketplace/community-operators-kv46m" Feb 02 13:07:15 crc kubenswrapper[4721]: I0202 13:07:15.736196 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w5wlg" Feb 02 13:07:15 crc kubenswrapper[4721]: I0202 13:07:15.749590 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5svst\" (UniqueName: \"kubernetes.io/projected/be9ad0b8-eef7-451f-82b9-1b5cc54c63c2-kube-api-access-5svst\") pod \"community-operators-kv46m\" (UID: \"be9ad0b8-eef7-451f-82b9-1b5cc54c63c2\") " pod="openshift-marketplace/community-operators-kv46m" Feb 02 13:07:15 crc kubenswrapper[4721]: I0202 13:07:15.749662 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be9ad0b8-eef7-451f-82b9-1b5cc54c63c2-utilities\") pod \"community-operators-kv46m\" (UID: \"be9ad0b8-eef7-451f-82b9-1b5cc54c63c2\") " pod="openshift-marketplace/community-operators-kv46m" Feb 02 13:07:15 crc kubenswrapper[4721]: I0202 13:07:15.749726 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be9ad0b8-eef7-451f-82b9-1b5cc54c63c2-catalog-content\") pod \"community-operators-kv46m\" (UID: \"be9ad0b8-eef7-451f-82b9-1b5cc54c63c2\") " pod="openshift-marketplace/community-operators-kv46m" Feb 02 13:07:15 crc kubenswrapper[4721]: I0202 13:07:15.750316 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be9ad0b8-eef7-451f-82b9-1b5cc54c63c2-catalog-content\") pod \"community-operators-kv46m\" (UID: \"be9ad0b8-eef7-451f-82b9-1b5cc54c63c2\") " pod="openshift-marketplace/community-operators-kv46m" Feb 02 13:07:15 crc kubenswrapper[4721]: I0202 13:07:15.750427 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be9ad0b8-eef7-451f-82b9-1b5cc54c63c2-utilities\") pod \"community-operators-kv46m\" (UID: \"be9ad0b8-eef7-451f-82b9-1b5cc54c63c2\") " pod="openshift-marketplace/community-operators-kv46m" Feb 02 13:07:15 crc kubenswrapper[4721]: I0202 13:07:15.769804 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5svst\" (UniqueName: \"kubernetes.io/projected/be9ad0b8-eef7-451f-82b9-1b5cc54c63c2-kube-api-access-5svst\") pod \"community-operators-kv46m\" (UID: \"be9ad0b8-eef7-451f-82b9-1b5cc54c63c2\") " pod="openshift-marketplace/community-operators-kv46m" Feb 02 13:07:15 crc kubenswrapper[4721]: I0202 13:07:15.966804 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kv46m" Feb 02 13:07:16 crc kubenswrapper[4721]: I0202 13:07:16.420909 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kv46m"] Feb 02 13:07:16 crc kubenswrapper[4721]: I0202 13:07:16.471582 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w5wlg"] Feb 02 13:07:16 crc kubenswrapper[4721]: W0202 13:07:16.481612 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2db39b59_16bf_4029_b8be_4be395b09cdf.slice/crio-aa53f668db48e2aa112a5cbfcc3d2601a92bffa612a49b3105e8618823c15e6b WatchSource:0}: Error finding container aa53f668db48e2aa112a5cbfcc3d2601a92bffa612a49b3105e8618823c15e6b: Status 404 returned error can't find the container with id aa53f668db48e2aa112a5cbfcc3d2601a92bffa612a49b3105e8618823c15e6b Feb 02 13:07:16 crc kubenswrapper[4721]: I0202 13:07:16.551009 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kv46m" event={"ID":"be9ad0b8-eef7-451f-82b9-1b5cc54c63c2","Type":"ContainerStarted","Data":"13b55abfcd9ca12983bfa4cac3819ce020504592eb47d09b42cdb49038a429be"} Feb 02 13:07:16 crc kubenswrapper[4721]: I0202 13:07:16.552242 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5wlg" event={"ID":"2db39b59-16bf-4029-b8be-4be395b09cdf","Type":"ContainerStarted","Data":"aa53f668db48e2aa112a5cbfcc3d2601a92bffa612a49b3105e8618823c15e6b"} Feb 02 13:07:16 crc kubenswrapper[4721]: I0202 13:07:16.555641 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gc4db" event={"ID":"3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1","Type":"ContainerStarted","Data":"cd748e7fb6f12d62b84d0d6332d25370d2d67d418388354bb68e32e6ced809d6"} Feb 02 13:07:16 crc kubenswrapper[4721]: I0202 13:07:16.559231 4721 generic.go:334] "Generic (PLEG): container finished" podID="8851d4c5-8c20-440c-bb07-d7542ea1620d" containerID="b921fdc1a1548d0c5260ec96ba224ac1d42c622bf506f78cd51ecd903122aa38" exitCode=0 Feb 02 13:07:16 crc kubenswrapper[4721]: I0202 13:07:16.559271 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c64xc" event={"ID":"8851d4c5-8c20-440c-bb07-d7542ea1620d","Type":"ContainerDied","Data":"b921fdc1a1548d0c5260ec96ba224ac1d42c622bf506f78cd51ecd903122aa38"} Feb 02 13:07:17 crc kubenswrapper[4721]: I0202 13:07:17.566676 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c64xc" event={"ID":"8851d4c5-8c20-440c-bb07-d7542ea1620d","Type":"ContainerStarted","Data":"b883177debaaa82956b438155412f9e795a9ef3cca98eeb4a50b4e9a65b484f7"} Feb 02 13:07:17 crc kubenswrapper[4721]: I0202 13:07:17.569849 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kv46m" event={"ID":"be9ad0b8-eef7-451f-82b9-1b5cc54c63c2","Type":"ContainerDied","Data":"27d629500b0dd1abf484f3a89bad9c432c4ba1d67d2523e09a86d87aff25f1c0"} Feb 02 13:07:17 crc kubenswrapper[4721]: I0202 13:07:17.569707 4721 generic.go:334] "Generic (PLEG): container finished" podID="be9ad0b8-eef7-451f-82b9-1b5cc54c63c2" containerID="27d629500b0dd1abf484f3a89bad9c432c4ba1d67d2523e09a86d87aff25f1c0" exitCode=0 Feb 02 13:07:17 crc kubenswrapper[4721]: I0202 13:07:17.573967 4721 generic.go:334] "Generic (PLEG): container finished" 
podID="2db39b59-16bf-4029-b8be-4be395b09cdf" containerID="cda228ff8b653e386af95f32890f0fe9189a306544ff456cd8d4ee6c401233dc" exitCode=0 Feb 02 13:07:17 crc kubenswrapper[4721]: I0202 13:07:17.574023 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5wlg" event={"ID":"2db39b59-16bf-4029-b8be-4be395b09cdf","Type":"ContainerDied","Data":"cda228ff8b653e386af95f32890f0fe9189a306544ff456cd8d4ee6c401233dc"} Feb 02 13:07:17 crc kubenswrapper[4721]: I0202 13:07:17.576876 4721 generic.go:334] "Generic (PLEG): container finished" podID="3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1" containerID="cd748e7fb6f12d62b84d0d6332d25370d2d67d418388354bb68e32e6ced809d6" exitCode=0 Feb 02 13:07:17 crc kubenswrapper[4721]: I0202 13:07:17.576898 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gc4db" event={"ID":"3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1","Type":"ContainerDied","Data":"cd748e7fb6f12d62b84d0d6332d25370d2d67d418388354bb68e32e6ced809d6"} Feb 02 13:07:17 crc kubenswrapper[4721]: I0202 13:07:17.614739 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c64xc" podStartSLOduration=2.959321091 podStartE2EDuration="5.614715549s" podCreationTimestamp="2026-02-02 13:07:12 +0000 UTC" firstStartedPulling="2026-02-02 13:07:14.536426484 +0000 UTC m=+374.838940873" lastFinishedPulling="2026-02-02 13:07:17.191820932 +0000 UTC m=+377.494335331" observedRunningTime="2026-02-02 13:07:17.593337556 +0000 UTC m=+377.895851965" watchObservedRunningTime="2026-02-02 13:07:17.614715549 +0000 UTC m=+377.917229948" Feb 02 13:07:19 crc kubenswrapper[4721]: I0202 13:07:19.589160 4721 generic.go:334] "Generic (PLEG): container finished" podID="be9ad0b8-eef7-451f-82b9-1b5cc54c63c2" containerID="b13058d7a73183fa9d3319bc03fdee0b0f77b3a7ccd547efe16bc18ab7a6b684" exitCode=0 Feb 02 13:07:19 crc kubenswrapper[4721]: I0202 13:07:19.589218 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kv46m" event={"ID":"be9ad0b8-eef7-451f-82b9-1b5cc54c63c2","Type":"ContainerDied","Data":"b13058d7a73183fa9d3319bc03fdee0b0f77b3a7ccd547efe16bc18ab7a6b684"} Feb 02 13:07:19 crc kubenswrapper[4721]: I0202 13:07:19.594184 4721 generic.go:334] "Generic (PLEG): container finished" podID="2db39b59-16bf-4029-b8be-4be395b09cdf" containerID="6a9b786e3b23e3aaade71c1314045316e2616b985dc63d18d872102a5103829f" exitCode=0 Feb 02 13:07:19 crc kubenswrapper[4721]: I0202 13:07:19.594331 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5wlg" event={"ID":"2db39b59-16bf-4029-b8be-4be395b09cdf","Type":"ContainerDied","Data":"6a9b786e3b23e3aaade71c1314045316e2616b985dc63d18d872102a5103829f"} Feb 02 13:07:19 crc kubenswrapper[4721]: I0202 13:07:19.600137 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gc4db" event={"ID":"3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1","Type":"ContainerStarted","Data":"f83c45e1ff378ffe8b7f795fb252604530ddd6b5c22fbafb2acc28ce75d5835a"} Feb 02 13:07:19 crc kubenswrapper[4721]: I0202 13:07:19.636168 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gc4db" podStartSLOduration=2.688735166 podStartE2EDuration="6.636152451s" podCreationTimestamp="2026-02-02 13:07:13 +0000 UTC" firstStartedPulling="2026-02-02 13:07:14.539830819 +0000 UTC m=+374.842345208" lastFinishedPulling="2026-02-02 
13:07:18.487248104 +0000 UTC m=+378.789762493" observedRunningTime="2026-02-02 13:07:19.63503319 +0000 UTC m=+379.937547589" watchObservedRunningTime="2026-02-02 13:07:19.636152451 +0000 UTC m=+379.938666840" Feb 02 13:07:20 crc kubenswrapper[4721]: I0202 13:07:20.608199 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kv46m" event={"ID":"be9ad0b8-eef7-451f-82b9-1b5cc54c63c2","Type":"ContainerStarted","Data":"aee779ee53582607f4f45862acaef80e8e030a57d723aa45959d7fc3dc54b957"} Feb 02 13:07:20 crc kubenswrapper[4721]: I0202 13:07:20.618889 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5wlg" event={"ID":"2db39b59-16bf-4029-b8be-4be395b09cdf","Type":"ContainerStarted","Data":"fac2c6657c2c4b2c8f188a592777004421506c7d4a56916a7a265847528a9b87"} Feb 02 13:07:20 crc kubenswrapper[4721]: I0202 13:07:20.642335 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kv46m" podStartSLOduration=3.254529645 podStartE2EDuration="5.642318255s" podCreationTimestamp="2026-02-02 13:07:15 +0000 UTC" firstStartedPulling="2026-02-02 13:07:17.570988745 +0000 UTC m=+377.873503134" lastFinishedPulling="2026-02-02 13:07:19.958777365 +0000 UTC m=+380.261291744" observedRunningTime="2026-02-02 13:07:20.636314779 +0000 UTC m=+380.938829168" watchObservedRunningTime="2026-02-02 13:07:20.642318255 +0000 UTC m=+380.944832664" Feb 02 13:07:20 crc kubenswrapper[4721]: I0202 13:07:20.657620 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w5wlg" podStartSLOduration=3.216043257 podStartE2EDuration="5.657583309s" podCreationTimestamp="2026-02-02 13:07:15 +0000 UTC" firstStartedPulling="2026-02-02 13:07:17.575914631 +0000 UTC m=+377.878429020" lastFinishedPulling="2026-02-02 13:07:20.017454683 +0000 UTC m=+380.319969072" observedRunningTime="2026-02-02 13:07:20.652129808 +0000 UTC m=+380.954644207" watchObservedRunningTime="2026-02-02 13:07:20.657583309 +0000 UTC m=+380.960097708" Feb 02 13:07:23 crc kubenswrapper[4721]: I0202 13:07:23.344874 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c64xc" Feb 02 13:07:23 crc kubenswrapper[4721]: I0202 13:07:23.345255 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c64xc" Feb 02 13:07:23 crc kubenswrapper[4721]: I0202 13:07:23.385978 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c64xc" Feb 02 13:07:23 crc kubenswrapper[4721]: I0202 13:07:23.548361 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gc4db" Feb 02 13:07:23 crc kubenswrapper[4721]: I0202 13:07:23.548428 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gc4db" Feb 02 13:07:23 crc kubenswrapper[4721]: I0202 13:07:23.704801 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c64xc" Feb 02 13:07:24 crc kubenswrapper[4721]: I0202 13:07:24.608192 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gc4db" podUID="3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1" containerName="registry-server" probeResult="failure" output=< Feb 02 13:07:24 crc 
kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Feb 02 13:07:24 crc kubenswrapper[4721]: > Feb 02 13:07:25 crc kubenswrapper[4721]: I0202 13:07:25.737755 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w5wlg" Feb 02 13:07:25 crc kubenswrapper[4721]: I0202 13:07:25.738596 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w5wlg" Feb 02 13:07:25 crc kubenswrapper[4721]: I0202 13:07:25.785375 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w5wlg" Feb 02 13:07:25 crc kubenswrapper[4721]: I0202 13:07:25.967718 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kv46m" Feb 02 13:07:25 crc kubenswrapper[4721]: I0202 13:07:25.969099 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kv46m" Feb 02 13:07:26 crc kubenswrapper[4721]: I0202 13:07:26.005444 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kv46m" Feb 02 13:07:26 crc kubenswrapper[4721]: I0202 13:07:26.682805 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w5wlg" Feb 02 13:07:26 crc kubenswrapper[4721]: I0202 13:07:26.683418 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kv46m" Feb 02 13:07:33 crc kubenswrapper[4721]: I0202 13:07:33.144305 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" podUID="e5f7f80a-15ef-47b9-9e1e-325066df7897" containerName="registry" containerID="cri-o://0bd3bc5f864672ee0d853f714bcf1118de69c8c71858f5b65ce239a92ed34811" gracePeriod=30 Feb 02 13:07:33 crc kubenswrapper[4721]: I0202 13:07:33.590019 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gc4db" Feb 02 13:07:33 crc kubenswrapper[4721]: I0202 13:07:33.629253 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gc4db" Feb 02 13:07:33 crc kubenswrapper[4721]: I0202 13:07:33.711033 4721 generic.go:334] "Generic (PLEG): container finished" podID="e5f7f80a-15ef-47b9-9e1e-325066df7897" containerID="0bd3bc5f864672ee0d853f714bcf1118de69c8c71858f5b65ce239a92ed34811" exitCode=0 Feb 02 13:07:33 crc kubenswrapper[4721]: I0202 13:07:33.711114 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" event={"ID":"e5f7f80a-15ef-47b9-9e1e-325066df7897","Type":"ContainerDied","Data":"0bd3bc5f864672ee0d853f714bcf1118de69c8c71858f5b65ce239a92ed34811"} Feb 02 13:07:34 crc kubenswrapper[4721]: I0202 13:07:34.071978 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:07:34 crc kubenswrapper[4721]: I0202 13:07:34.210470 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e5f7f80a-15ef-47b9-9e1e-325066df7897-ca-trust-extracted\") pod \"e5f7f80a-15ef-47b9-9e1e-325066df7897\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " Feb 02 13:07:34 crc kubenswrapper[4721]: I0202 13:07:34.210578 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e5f7f80a-15ef-47b9-9e1e-325066df7897-trusted-ca\") pod \"e5f7f80a-15ef-47b9-9e1e-325066df7897\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " Feb 02 13:07:34 crc kubenswrapper[4721]: I0202 13:07:34.210614 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwm4n\" (UniqueName: \"kubernetes.io/projected/e5f7f80a-15ef-47b9-9e1e-325066df7897-kube-api-access-rwm4n\") pod \"e5f7f80a-15ef-47b9-9e1e-325066df7897\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " Feb 02 13:07:34 crc kubenswrapper[4721]: I0202 13:07:34.210650 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e5f7f80a-15ef-47b9-9e1e-325066df7897-installation-pull-secrets\") pod \"e5f7f80a-15ef-47b9-9e1e-325066df7897\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " Feb 02 13:07:34 crc kubenswrapper[4721]: I0202 13:07:34.210704 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e5f7f80a-15ef-47b9-9e1e-325066df7897-registry-certificates\") pod \"e5f7f80a-15ef-47b9-9e1e-325066df7897\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " Feb 02 13:07:34 crc kubenswrapper[4721]: I0202 13:07:34.210861 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"e5f7f80a-15ef-47b9-9e1e-325066df7897\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " Feb 02 13:07:34 crc kubenswrapper[4721]: I0202 13:07:34.210889 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e5f7f80a-15ef-47b9-9e1e-325066df7897-registry-tls\") pod \"e5f7f80a-15ef-47b9-9e1e-325066df7897\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " Feb 02 13:07:34 crc kubenswrapper[4721]: I0202 13:07:34.210931 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e5f7f80a-15ef-47b9-9e1e-325066df7897-bound-sa-token\") pod \"e5f7f80a-15ef-47b9-9e1e-325066df7897\" (UID: \"e5f7f80a-15ef-47b9-9e1e-325066df7897\") " Feb 02 13:07:34 crc kubenswrapper[4721]: I0202 13:07:34.211766 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5f7f80a-15ef-47b9-9e1e-325066df7897-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "e5f7f80a-15ef-47b9-9e1e-325066df7897" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:07:34 crc kubenswrapper[4721]: I0202 13:07:34.211789 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5f7f80a-15ef-47b9-9e1e-325066df7897-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "e5f7f80a-15ef-47b9-9e1e-325066df7897" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:07:34 crc kubenswrapper[4721]: I0202 13:07:34.212614 4721 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e5f7f80a-15ef-47b9-9e1e-325066df7897-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:34 crc kubenswrapper[4721]: I0202 13:07:34.212653 4721 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e5f7f80a-15ef-47b9-9e1e-325066df7897-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:34 crc kubenswrapper[4721]: I0202 13:07:34.217191 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5f7f80a-15ef-47b9-9e1e-325066df7897-kube-api-access-rwm4n" (OuterVolumeSpecName: "kube-api-access-rwm4n") pod "e5f7f80a-15ef-47b9-9e1e-325066df7897" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897"). InnerVolumeSpecName "kube-api-access-rwm4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:07:34 crc kubenswrapper[4721]: I0202 13:07:34.217991 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5f7f80a-15ef-47b9-9e1e-325066df7897-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "e5f7f80a-15ef-47b9-9e1e-325066df7897" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:07:34 crc kubenswrapper[4721]: I0202 13:07:34.217513 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f7f80a-15ef-47b9-9e1e-325066df7897-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "e5f7f80a-15ef-47b9-9e1e-325066df7897" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:07:34 crc kubenswrapper[4721]: I0202 13:07:34.219305 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5f7f80a-15ef-47b9-9e1e-325066df7897-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "e5f7f80a-15ef-47b9-9e1e-325066df7897" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:07:34 crc kubenswrapper[4721]: I0202 13:07:34.236053 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5f7f80a-15ef-47b9-9e1e-325066df7897-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "e5f7f80a-15ef-47b9-9e1e-325066df7897" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:07:34 crc kubenswrapper[4721]: I0202 13:07:34.237949 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "e5f7f80a-15ef-47b9-9e1e-325066df7897" (UID: "e5f7f80a-15ef-47b9-9e1e-325066df7897"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 13:07:34 crc kubenswrapper[4721]: I0202 13:07:34.313596 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwm4n\" (UniqueName: \"kubernetes.io/projected/e5f7f80a-15ef-47b9-9e1e-325066df7897-kube-api-access-rwm4n\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:34 crc kubenswrapper[4721]: I0202 13:07:34.313638 4721 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e5f7f80a-15ef-47b9-9e1e-325066df7897-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:34 crc kubenswrapper[4721]: I0202 13:07:34.313652 4721 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e5f7f80a-15ef-47b9-9e1e-325066df7897-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:34 crc kubenswrapper[4721]: I0202 13:07:34.313664 4721 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e5f7f80a-15ef-47b9-9e1e-325066df7897-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:34 crc kubenswrapper[4721]: I0202 13:07:34.313674 4721 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e5f7f80a-15ef-47b9-9e1e-325066df7897-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 13:07:34 crc kubenswrapper[4721]: I0202 13:07:34.717609 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" event={"ID":"e5f7f80a-15ef-47b9-9e1e-325066df7897","Type":"ContainerDied","Data":"05e1b0050534ad29187ccb842c7d704d41289dc2c02dfe3c8fae4b1bff20a647"} Feb 02 13:07:34 crc kubenswrapper[4721]: I0202 13:07:34.718061 4721 scope.go:117] "RemoveContainer" containerID="0bd3bc5f864672ee0d853f714bcf1118de69c8c71858f5b65ce239a92ed34811" Feb 02 13:07:34 crc kubenswrapper[4721]: I0202 13:07:34.717891 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wlhhk" Feb 02 13:07:34 crc kubenswrapper[4721]: I0202 13:07:34.734040 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wlhhk"] Feb 02 13:07:34 crc kubenswrapper[4721]: I0202 13:07:34.738521 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wlhhk"] Feb 02 13:07:36 crc kubenswrapper[4721]: I0202 13:07:36.416477 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5f7f80a-15ef-47b9-9e1e-325066df7897" path="/var/lib/kubelet/pods/e5f7f80a-15ef-47b9-9e1e-325066df7897/volumes" Feb 02 13:07:42 crc kubenswrapper[4721]: I0202 13:07:42.957227 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-z4lcg"] Feb 02 13:07:42 crc kubenswrapper[4721]: E0202 13:07:42.957462 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5f7f80a-15ef-47b9-9e1e-325066df7897" containerName="registry" Feb 02 13:07:42 crc kubenswrapper[4721]: I0202 13:07:42.957473 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5f7f80a-15ef-47b9-9e1e-325066df7897" containerName="registry" Feb 02 13:07:42 crc kubenswrapper[4721]: I0202 13:07:42.957558 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5f7f80a-15ef-47b9-9e1e-325066df7897" containerName="registry" Feb 02 13:07:42 crc kubenswrapper[4721]: I0202 13:07:42.957898 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-z4lcg" Feb 02 13:07:42 crc kubenswrapper[4721]: I0202 13:07:42.961051 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 02 13:07:42 crc kubenswrapper[4721]: I0202 13:07:42.961207 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 02 13:07:42 crc kubenswrapper[4721]: I0202 13:07:42.961259 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 02 13:07:42 crc kubenswrapper[4721]: I0202 13:07:42.964997 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 02 13:07:42 crc kubenswrapper[4721]: I0202 13:07:42.965425 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Feb 02 13:07:42 crc kubenswrapper[4721]: I0202 13:07:42.971478 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-z4lcg"] Feb 02 13:07:43 crc kubenswrapper[4721]: I0202 13:07:43.130472 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-298jq\" (UniqueName: \"kubernetes.io/projected/5de3bc6e-7b95-472f-9f28-84414fa8e54f-kube-api-access-298jq\") pod \"cluster-monitoring-operator-6d5b84845-z4lcg\" (UID: \"5de3bc6e-7b95-472f-9f28-84414fa8e54f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-z4lcg" Feb 02 13:07:43 crc kubenswrapper[4721]: I0202 13:07:43.130515 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/5de3bc6e-7b95-472f-9f28-84414fa8e54f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-z4lcg\" (UID: \"5de3bc6e-7b95-472f-9f28-84414fa8e54f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-z4lcg" Feb 02 13:07:43 crc kubenswrapper[4721]: I0202 13:07:43.130571 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/5de3bc6e-7b95-472f-9f28-84414fa8e54f-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-z4lcg\" (UID: \"5de3bc6e-7b95-472f-9f28-84414fa8e54f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-z4lcg" Feb 02 13:07:43 crc kubenswrapper[4721]: I0202 13:07:43.231485 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/5de3bc6e-7b95-472f-9f28-84414fa8e54f-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-z4lcg\" (UID: \"5de3bc6e-7b95-472f-9f28-84414fa8e54f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-z4lcg" Feb 02 13:07:43 crc kubenswrapper[4721]: I0202 13:07:43.231557 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-298jq\" (UniqueName: \"kubernetes.io/projected/5de3bc6e-7b95-472f-9f28-84414fa8e54f-kube-api-access-298jq\") pod \"cluster-monitoring-operator-6d5b84845-z4lcg\" (UID: \"5de3bc6e-7b95-472f-9f28-84414fa8e54f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-z4lcg" Feb 02 13:07:43 crc kubenswrapper[4721]: I0202 13:07:43.231582 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/5de3bc6e-7b95-472f-9f28-84414fa8e54f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-z4lcg\" (UID: \"5de3bc6e-7b95-472f-9f28-84414fa8e54f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-z4lcg" Feb 02 13:07:43 crc kubenswrapper[4721]: I0202 13:07:43.232658 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/5de3bc6e-7b95-472f-9f28-84414fa8e54f-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-z4lcg\" (UID: \"5de3bc6e-7b95-472f-9f28-84414fa8e54f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-z4lcg" Feb 02 13:07:43 crc kubenswrapper[4721]: I0202 13:07:43.239212 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/5de3bc6e-7b95-472f-9f28-84414fa8e54f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-z4lcg\" (UID: \"5de3bc6e-7b95-472f-9f28-84414fa8e54f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-z4lcg" Feb 02 13:07:43 crc kubenswrapper[4721]: I0202 13:07:43.254018 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-298jq\" (UniqueName: \"kubernetes.io/projected/5de3bc6e-7b95-472f-9f28-84414fa8e54f-kube-api-access-298jq\") pod \"cluster-monitoring-operator-6d5b84845-z4lcg\" (UID: \"5de3bc6e-7b95-472f-9f28-84414fa8e54f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-z4lcg" Feb 02 13:07:43 crc kubenswrapper[4721]: I0202 13:07:43.277648 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-z4lcg" Feb 02 13:07:43 crc kubenswrapper[4721]: I0202 13:07:43.728060 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-z4lcg"] Feb 02 13:07:43 crc kubenswrapper[4721]: I0202 13:07:43.761569 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-z4lcg" event={"ID":"5de3bc6e-7b95-472f-9f28-84414fa8e54f","Type":"ContainerStarted","Data":"dafcff591e88db5db9959a8fb275dae4293f8e5f2446448eb2a2c1af2443abc5"} Feb 02 13:07:44 crc kubenswrapper[4721]: I0202 13:07:44.763694 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:07:44 crc kubenswrapper[4721]: I0202 13:07:44.764662 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:07:46 crc kubenswrapper[4721]: I0202 13:07:46.781451 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-z4lcg" event={"ID":"5de3bc6e-7b95-472f-9f28-84414fa8e54f","Type":"ContainerStarted","Data":"0cc61598d53f44af4ac9dd5a6d6d9feb6b77c30d6bfacc5275df7e8e95f7394b"} Feb 02 13:07:46 crc kubenswrapper[4721]: I0202 13:07:46.796658 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-z4lcg" podStartSLOduration=2.212937993 podStartE2EDuration="4.796634878s" podCreationTimestamp="2026-02-02 13:07:42 +0000 UTC" firstStartedPulling="2026-02-02 13:07:43.745616142 +0000 UTC m=+404.048130531" lastFinishedPulling="2026-02-02 13:07:46.329313027 +0000 UTC m=+406.631827416" observedRunningTime="2026-02-02 13:07:46.793297216 +0000 UTC m=+407.095811605" watchObservedRunningTime="2026-02-02 13:07:46.796634878 +0000 UTC m=+407.099149257" Feb 02 13:07:47 crc kubenswrapper[4721]: I0202 13:07:47.083399 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dh4qm"] Feb 02 13:07:47 crc kubenswrapper[4721]: I0202 13:07:47.084298 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dh4qm" Feb 02 13:07:47 crc kubenswrapper[4721]: I0202 13:07:47.088249 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-8k4jz" Feb 02 13:07:47 crc kubenswrapper[4721]: I0202 13:07:47.088600 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 02 13:07:47 crc kubenswrapper[4721]: I0202 13:07:47.099604 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dh4qm"] Feb 02 13:07:47 crc kubenswrapper[4721]: I0202 13:07:47.195428 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/a960c725-d05f-4af4-bf9b-aee9a8e8ffbe-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-dh4qm\" (UID: \"a960c725-d05f-4af4-bf9b-aee9a8e8ffbe\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dh4qm" Feb 02 13:07:47 crc kubenswrapper[4721]: I0202 13:07:47.296376 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/a960c725-d05f-4af4-bf9b-aee9a8e8ffbe-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-dh4qm\" (UID: \"a960c725-d05f-4af4-bf9b-aee9a8e8ffbe\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dh4qm" Feb 02 13:07:47 crc kubenswrapper[4721]: I0202 13:07:47.302914 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/a960c725-d05f-4af4-bf9b-aee9a8e8ffbe-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-dh4qm\" (UID: \"a960c725-d05f-4af4-bf9b-aee9a8e8ffbe\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dh4qm" Feb 02 13:07:47 crc kubenswrapper[4721]: I0202 13:07:47.401658 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dh4qm" Feb 02 13:07:47 crc kubenswrapper[4721]: I0202 13:07:47.801128 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dh4qm"] Feb 02 13:07:48 crc kubenswrapper[4721]: I0202 13:07:48.797125 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dh4qm" event={"ID":"a960c725-d05f-4af4-bf9b-aee9a8e8ffbe","Type":"ContainerStarted","Data":"4931697aae06df67eeb7ddf8ecdac0170e6c1491c1d06241b0757ab36570eff4"} Feb 02 13:07:51 crc kubenswrapper[4721]: I0202 13:07:51.823843 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dh4qm" event={"ID":"a960c725-d05f-4af4-bf9b-aee9a8e8ffbe","Type":"ContainerStarted","Data":"e30334e8e451cfe04ca398502eac04e5a2354cfaf54fcdef7ee593b7844b0b85"} Feb 02 13:07:51 crc kubenswrapper[4721]: I0202 13:07:51.824243 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dh4qm" Feb 02 13:07:51 crc kubenswrapper[4721]: I0202 13:07:51.829908 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dh4qm" Feb 02 13:07:51 crc kubenswrapper[4721]: I0202 13:07:51.861671 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-dh4qm" podStartSLOduration=1.882668263 podStartE2EDuration="4.861646495s" podCreationTimestamp="2026-02-02 13:07:47 +0000 UTC" firstStartedPulling="2026-02-02 13:07:47.806681307 +0000 UTC m=+408.109195686" lastFinishedPulling="2026-02-02 13:07:50.785659529 +0000 UTC m=+411.088173918" observedRunningTime="2026-02-02 13:07:51.844471629 +0000 UTC m=+412.146986018" watchObservedRunningTime="2026-02-02 13:07:51.861646495 +0000 UTC m=+412.164160904" Feb 02 13:07:52 crc kubenswrapper[4721]: I0202 13:07:52.139341 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-j2fmj"] Feb 02 13:07:52 crc kubenswrapper[4721]: I0202 13:07:52.140391 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-j2fmj" Feb 02 13:07:52 crc kubenswrapper[4721]: I0202 13:07:52.142363 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 02 13:07:52 crc kubenswrapper[4721]: I0202 13:07:52.142534 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-8js9g" Feb 02 13:07:52 crc kubenswrapper[4721]: I0202 13:07:52.142782 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 02 13:07:52 crc kubenswrapper[4721]: I0202 13:07:52.143740 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 02 13:07:52 crc kubenswrapper[4721]: I0202 13:07:52.149653 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-j2fmj"] Feb 02 13:07:52 crc kubenswrapper[4721]: I0202 13:07:52.263344 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c45de1fe-3902-4bc0-8cf6-0b3312c92c9e-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-j2fmj\" (UID: \"c45de1fe-3902-4bc0-8cf6-0b3312c92c9e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-j2fmj" Feb 02 13:07:52 crc kubenswrapper[4721]: I0202 13:07:52.263420 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/c45de1fe-3902-4bc0-8cf6-0b3312c92c9e-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-j2fmj\" (UID: \"c45de1fe-3902-4bc0-8cf6-0b3312c92c9e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-j2fmj" Feb 02 13:07:52 crc kubenswrapper[4721]: I0202 13:07:52.263482 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c45de1fe-3902-4bc0-8cf6-0b3312c92c9e-metrics-client-ca\") pod \"prometheus-operator-db54df47d-j2fmj\" (UID: \"c45de1fe-3902-4bc0-8cf6-0b3312c92c9e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-j2fmj" Feb 02 13:07:52 crc kubenswrapper[4721]: I0202 13:07:52.263507 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnzb2\" (UniqueName: \"kubernetes.io/projected/c45de1fe-3902-4bc0-8cf6-0b3312c92c9e-kube-api-access-tnzb2\") pod \"prometheus-operator-db54df47d-j2fmj\" (UID: \"c45de1fe-3902-4bc0-8cf6-0b3312c92c9e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-j2fmj" Feb 02 13:07:52 crc kubenswrapper[4721]: I0202 13:07:52.365279 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c45de1fe-3902-4bc0-8cf6-0b3312c92c9e-metrics-client-ca\") pod \"prometheus-operator-db54df47d-j2fmj\" (UID: \"c45de1fe-3902-4bc0-8cf6-0b3312c92c9e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-j2fmj" Feb 02 13:07:52 crc kubenswrapper[4721]: I0202 13:07:52.365337 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnzb2\" (UniqueName: \"kubernetes.io/projected/c45de1fe-3902-4bc0-8cf6-0b3312c92c9e-kube-api-access-tnzb2\") pod 
\"prometheus-operator-db54df47d-j2fmj\" (UID: \"c45de1fe-3902-4bc0-8cf6-0b3312c92c9e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-j2fmj" Feb 02 13:07:52 crc kubenswrapper[4721]: I0202 13:07:52.365391 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c45de1fe-3902-4bc0-8cf6-0b3312c92c9e-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-j2fmj\" (UID: \"c45de1fe-3902-4bc0-8cf6-0b3312c92c9e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-j2fmj" Feb 02 13:07:52 crc kubenswrapper[4721]: I0202 13:07:52.365442 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/c45de1fe-3902-4bc0-8cf6-0b3312c92c9e-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-j2fmj\" (UID: \"c45de1fe-3902-4bc0-8cf6-0b3312c92c9e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-j2fmj" Feb 02 13:07:52 crc kubenswrapper[4721]: I0202 13:07:52.367191 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c45de1fe-3902-4bc0-8cf6-0b3312c92c9e-metrics-client-ca\") pod \"prometheus-operator-db54df47d-j2fmj\" (UID: \"c45de1fe-3902-4bc0-8cf6-0b3312c92c9e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-j2fmj" Feb 02 13:07:52 crc kubenswrapper[4721]: I0202 13:07:52.374368 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c45de1fe-3902-4bc0-8cf6-0b3312c92c9e-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-j2fmj\" (UID: \"c45de1fe-3902-4bc0-8cf6-0b3312c92c9e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-j2fmj" Feb 02 13:07:52 crc kubenswrapper[4721]: I0202 13:07:52.376125 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/c45de1fe-3902-4bc0-8cf6-0b3312c92c9e-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-j2fmj\" (UID: \"c45de1fe-3902-4bc0-8cf6-0b3312c92c9e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-j2fmj" Feb 02 13:07:52 crc kubenswrapper[4721]: I0202 13:07:52.385584 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnzb2\" (UniqueName: \"kubernetes.io/projected/c45de1fe-3902-4bc0-8cf6-0b3312c92c9e-kube-api-access-tnzb2\") pod \"prometheus-operator-db54df47d-j2fmj\" (UID: \"c45de1fe-3902-4bc0-8cf6-0b3312c92c9e\") " pod="openshift-monitoring/prometheus-operator-db54df47d-j2fmj" Feb 02 13:07:52 crc kubenswrapper[4721]: I0202 13:07:52.458599 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-j2fmj" Feb 02 13:07:52 crc kubenswrapper[4721]: I0202 13:07:52.844962 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-j2fmj"] Feb 02 13:07:52 crc kubenswrapper[4721]: W0202 13:07:52.850622 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc45de1fe_3902_4bc0_8cf6_0b3312c92c9e.slice/crio-ed6e64b27ff7a41568c005a31411f40b742531151ee0f85fdaf390e9ed43b35f WatchSource:0}: Error finding container ed6e64b27ff7a41568c005a31411f40b742531151ee0f85fdaf390e9ed43b35f: Status 404 returned error can't find the container with id ed6e64b27ff7a41568c005a31411f40b742531151ee0f85fdaf390e9ed43b35f Feb 02 13:07:53 crc kubenswrapper[4721]: I0202 13:07:53.850712 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-j2fmj" event={"ID":"c45de1fe-3902-4bc0-8cf6-0b3312c92c9e","Type":"ContainerStarted","Data":"ed6e64b27ff7a41568c005a31411f40b742531151ee0f85fdaf390e9ed43b35f"} Feb 02 13:07:57 crc kubenswrapper[4721]: I0202 13:07:57.958138 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-j2fmj" event={"ID":"c45de1fe-3902-4bc0-8cf6-0b3312c92c9e","Type":"ContainerStarted","Data":"f1444fdd436c5b8f212389e79bb38d2955fc2e11a97aa2a8f54c6548f061ecc1"} Feb 02 13:07:57 crc kubenswrapper[4721]: I0202 13:07:57.958732 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-j2fmj" event={"ID":"c45de1fe-3902-4bc0-8cf6-0b3312c92c9e","Type":"ContainerStarted","Data":"99f8574cf722fb3b223e8376e11c47089ae97133d665376b3b8176d028548f9e"} Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.522020 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-j2fmj" podStartSLOduration=4.970323901 podStartE2EDuration="7.522005129s" podCreationTimestamp="2026-02-02 13:07:52 +0000 UTC" firstStartedPulling="2026-02-02 13:07:52.853253494 +0000 UTC m=+413.155767883" lastFinishedPulling="2026-02-02 13:07:55.404934712 +0000 UTC m=+415.707449111" observedRunningTime="2026-02-02 13:07:57.975540135 +0000 UTC m=+418.278054524" watchObservedRunningTime="2026-02-02 13:07:59.522005129 +0000 UTC m=+419.824519518" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.523432 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-kfp42"] Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.524326 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-kfp42" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.528247 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.529339 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.529388 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-xn7ll" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.558880 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-kfp42"] Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.561728 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t76sw\" (UniqueName: \"kubernetes.io/projected/ba42cfe6-f3df-45f0-ab80-4781ce41b9a8-kube-api-access-t76sw\") pod \"openshift-state-metrics-566fddb674-kfp42\" (UID: \"ba42cfe6-f3df-45f0-ab80-4781ce41b9a8\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-kfp42" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.561802 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ba42cfe6-f3df-45f0-ab80-4781ce41b9a8-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-kfp42\" (UID: \"ba42cfe6-f3df-45f0-ab80-4781ce41b9a8\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-kfp42" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.561832 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ba42cfe6-f3df-45f0-ab80-4781ce41b9a8-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-kfp42\" (UID: \"ba42cfe6-f3df-45f0-ab80-4781ce41b9a8\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-kfp42" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.561873 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ba42cfe6-f3df-45f0-ab80-4781ce41b9a8-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-kfp42\" (UID: \"ba42cfe6-f3df-45f0-ab80-4781ce41b9a8\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-kfp42" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.611977 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-trwbk"] Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.613113 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-trwbk" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.615986 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.618468 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-jgvh5" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.618623 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.621150 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.638288 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-trwbk"] Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.643563 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-tdqc9"] Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.644847 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-tdqc9" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.646669 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.646977 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-v45hk" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.648915 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.663186 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t76sw\" (UniqueName: \"kubernetes.io/projected/ba42cfe6-f3df-45f0-ab80-4781ce41b9a8-kube-api-access-t76sw\") pod \"openshift-state-metrics-566fddb674-kfp42\" (UID: \"ba42cfe6-f3df-45f0-ab80-4781ce41b9a8\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-kfp42" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.663228 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b2e082e0-4057-4d2e-bcb3-dc5286f0f705-sys\") pod \"node-exporter-tdqc9\" (UID: \"b2e082e0-4057-4d2e-bcb3-dc5286f0f705\") " pod="openshift-monitoring/node-exporter-tdqc9" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.663253 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b2e082e0-4057-4d2e-bcb3-dc5286f0f705-root\") pod \"node-exporter-tdqc9\" (UID: \"b2e082e0-4057-4d2e-bcb3-dc5286f0f705\") " pod="openshift-monitoring/node-exporter-tdqc9" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.663280 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/91030360-19d6-44e9-b7a3-4662fd652353-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-trwbk\" (UID: 
\"91030360-19d6-44e9-b7a3-4662fd652353\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-trwbk" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.663303 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b2e082e0-4057-4d2e-bcb3-dc5286f0f705-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-tdqc9\" (UID: \"b2e082e0-4057-4d2e-bcb3-dc5286f0f705\") " pod="openshift-monitoring/node-exporter-tdqc9" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.663325 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ba42cfe6-f3df-45f0-ab80-4781ce41b9a8-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-kfp42\" (UID: \"ba42cfe6-f3df-45f0-ab80-4781ce41b9a8\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-kfp42" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.663351 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ba42cfe6-f3df-45f0-ab80-4781ce41b9a8-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-kfp42\" (UID: \"ba42cfe6-f3df-45f0-ab80-4781ce41b9a8\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-kfp42" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.663371 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99gz5\" (UniqueName: \"kubernetes.io/projected/b2e082e0-4057-4d2e-bcb3-dc5286f0f705-kube-api-access-99gz5\") pod \"node-exporter-tdqc9\" (UID: \"b2e082e0-4057-4d2e-bcb3-dc5286f0f705\") " pod="openshift-monitoring/node-exporter-tdqc9" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.663401 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/91030360-19d6-44e9-b7a3-4662fd652353-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-trwbk\" (UID: \"91030360-19d6-44e9-b7a3-4662fd652353\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-trwbk" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.663426 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b2e082e0-4057-4d2e-bcb3-dc5286f0f705-node-exporter-textfile\") pod \"node-exporter-tdqc9\" (UID: \"b2e082e0-4057-4d2e-bcb3-dc5286f0f705\") " pod="openshift-monitoring/node-exporter-tdqc9" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.663495 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b2e082e0-4057-4d2e-bcb3-dc5286f0f705-node-exporter-wtmp\") pod \"node-exporter-tdqc9\" (UID: \"b2e082e0-4057-4d2e-bcb3-dc5286f0f705\") " pod="openshift-monitoring/node-exporter-tdqc9" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.663628 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/91030360-19d6-44e9-b7a3-4662fd652353-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-trwbk\" (UID: \"91030360-19d6-44e9-b7a3-4662fd652353\") " 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-trwbk" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.664484 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ba42cfe6-f3df-45f0-ab80-4781ce41b9a8-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-kfp42\" (UID: \"ba42cfe6-f3df-45f0-ab80-4781ce41b9a8\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-kfp42" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.664587 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ba42cfe6-f3df-45f0-ab80-4781ce41b9a8-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-kfp42\" (UID: \"ba42cfe6-f3df-45f0-ab80-4781ce41b9a8\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-kfp42" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.664646 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/91030360-19d6-44e9-b7a3-4662fd652353-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-trwbk\" (UID: \"91030360-19d6-44e9-b7a3-4662fd652353\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-trwbk" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.664989 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/91030360-19d6-44e9-b7a3-4662fd652353-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-trwbk\" (UID: \"91030360-19d6-44e9-b7a3-4662fd652353\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-trwbk" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.665057 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b2e082e0-4057-4d2e-bcb3-dc5286f0f705-node-exporter-tls\") pod \"node-exporter-tdqc9\" (UID: \"b2e082e0-4057-4d2e-bcb3-dc5286f0f705\") " pod="openshift-monitoring/node-exporter-tdqc9" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.665102 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b2e082e0-4057-4d2e-bcb3-dc5286f0f705-metrics-client-ca\") pod \"node-exporter-tdqc9\" (UID: \"b2e082e0-4057-4d2e-bcb3-dc5286f0f705\") " pod="openshift-monitoring/node-exporter-tdqc9" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.665157 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n994k\" (UniqueName: \"kubernetes.io/projected/91030360-19d6-44e9-b7a3-4662fd652353-kube-api-access-n994k\") pod \"kube-state-metrics-777cb5bd5d-trwbk\" (UID: \"91030360-19d6-44e9-b7a3-4662fd652353\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-trwbk" Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.672702 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ba42cfe6-f3df-45f0-ab80-4781ce41b9a8-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-kfp42\" (UID: \"ba42cfe6-f3df-45f0-ab80-4781ce41b9a8\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-kfp42" 
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.673056 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ba42cfe6-f3df-45f0-ab80-4781ce41b9a8-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-kfp42\" (UID: \"ba42cfe6-f3df-45f0-ab80-4781ce41b9a8\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-kfp42"
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.683806 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t76sw\" (UniqueName: \"kubernetes.io/projected/ba42cfe6-f3df-45f0-ab80-4781ce41b9a8-kube-api-access-t76sw\") pod \"openshift-state-metrics-566fddb674-kfp42\" (UID: \"ba42cfe6-f3df-45f0-ab80-4781ce41b9a8\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-kfp42"
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.766537 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b2e082e0-4057-4d2e-bcb3-dc5286f0f705-node-exporter-tls\") pod \"node-exporter-tdqc9\" (UID: \"b2e082e0-4057-4d2e-bcb3-dc5286f0f705\") " pod="openshift-monitoring/node-exporter-tdqc9"
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.766596 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b2e082e0-4057-4d2e-bcb3-dc5286f0f705-metrics-client-ca\") pod \"node-exporter-tdqc9\" (UID: \"b2e082e0-4057-4d2e-bcb3-dc5286f0f705\") " pod="openshift-monitoring/node-exporter-tdqc9"
Feb 02 13:07:59 crc kubenswrapper[4721]: E0202 13:07:59.766731 4721 secret.go:188] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found
Feb 02 13:07:59 crc kubenswrapper[4721]: E0202 13:07:59.766810 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2e082e0-4057-4d2e-bcb3-dc5286f0f705-node-exporter-tls podName:b2e082e0-4057-4d2e-bcb3-dc5286f0f705 nodeName:}" failed. No retries permitted until 2026-02-02 13:08:00.266788257 +0000 UTC m=+420.569302646 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/b2e082e0-4057-4d2e-bcb3-dc5286f0f705-node-exporter-tls") pod "node-exporter-tdqc9" (UID: "b2e082e0-4057-4d2e-bcb3-dc5286f0f705") : secret "node-exporter-tls" not found
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.767510 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b2e082e0-4057-4d2e-bcb3-dc5286f0f705-metrics-client-ca\") pod \"node-exporter-tdqc9\" (UID: \"b2e082e0-4057-4d2e-bcb3-dc5286f0f705\") " pod="openshift-monitoring/node-exporter-tdqc9"
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.767572 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n994k\" (UniqueName: \"kubernetes.io/projected/91030360-19d6-44e9-b7a3-4662fd652353-kube-api-access-n994k\") pod \"kube-state-metrics-777cb5bd5d-trwbk\" (UID: \"91030360-19d6-44e9-b7a3-4662fd652353\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-trwbk"
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.767633 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b2e082e0-4057-4d2e-bcb3-dc5286f0f705-sys\") pod \"node-exporter-tdqc9\" (UID: \"b2e082e0-4057-4d2e-bcb3-dc5286f0f705\") " pod="openshift-monitoring/node-exporter-tdqc9"
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.767715 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b2e082e0-4057-4d2e-bcb3-dc5286f0f705-sys\") pod \"node-exporter-tdqc9\" (UID: \"b2e082e0-4057-4d2e-bcb3-dc5286f0f705\") " pod="openshift-monitoring/node-exporter-tdqc9"
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.767754 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b2e082e0-4057-4d2e-bcb3-dc5286f0f705-root\") pod \"node-exporter-tdqc9\" (UID: \"b2e082e0-4057-4d2e-bcb3-dc5286f0f705\") " pod="openshift-monitoring/node-exporter-tdqc9"
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.767785 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/91030360-19d6-44e9-b7a3-4662fd652353-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-trwbk\" (UID: \"91030360-19d6-44e9-b7a3-4662fd652353\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-trwbk"
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.767839 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b2e082e0-4057-4d2e-bcb3-dc5286f0f705-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-tdqc9\" (UID: \"b2e082e0-4057-4d2e-bcb3-dc5286f0f705\") " pod="openshift-monitoring/node-exporter-tdqc9"
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.767853 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b2e082e0-4057-4d2e-bcb3-dc5286f0f705-root\") pod \"node-exporter-tdqc9\" (UID: \"b2e082e0-4057-4d2e-bcb3-dc5286f0f705\") " pod="openshift-monitoring/node-exporter-tdqc9"
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.768669 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/91030360-19d6-44e9-b7a3-4662fd652353-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-trwbk\" (UID: \"91030360-19d6-44e9-b7a3-4662fd652353\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-trwbk"
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.768818 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99gz5\" (UniqueName: \"kubernetes.io/projected/b2e082e0-4057-4d2e-bcb3-dc5286f0f705-kube-api-access-99gz5\") pod \"node-exporter-tdqc9\" (UID: \"b2e082e0-4057-4d2e-bcb3-dc5286f0f705\") " pod="openshift-monitoring/node-exporter-tdqc9"
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.769234 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/91030360-19d6-44e9-b7a3-4662fd652353-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-trwbk\" (UID: \"91030360-19d6-44e9-b7a3-4662fd652353\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-trwbk"
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.769287 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b2e082e0-4057-4d2e-bcb3-dc5286f0f705-node-exporter-textfile\") pod \"node-exporter-tdqc9\" (UID: \"b2e082e0-4057-4d2e-bcb3-dc5286f0f705\") " pod="openshift-monitoring/node-exporter-tdqc9"
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.769314 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b2e082e0-4057-4d2e-bcb3-dc5286f0f705-node-exporter-wtmp\") pod \"node-exporter-tdqc9\" (UID: \"b2e082e0-4057-4d2e-bcb3-dc5286f0f705\") " pod="openshift-monitoring/node-exporter-tdqc9"
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.769331 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/91030360-19d6-44e9-b7a3-4662fd652353-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-trwbk\" (UID: \"91030360-19d6-44e9-b7a3-4662fd652353\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-trwbk"
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.769386 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/91030360-19d6-44e9-b7a3-4662fd652353-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-trwbk\" (UID: \"91030360-19d6-44e9-b7a3-4662fd652353\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-trwbk"
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.769412 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/91030360-19d6-44e9-b7a3-4662fd652353-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-trwbk\" (UID: \"91030360-19d6-44e9-b7a3-4662fd652353\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-trwbk"
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.769953 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/91030360-19d6-44e9-b7a3-4662fd652353-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-trwbk\" (UID: \"91030360-19d6-44e9-b7a3-4662fd652353\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-trwbk"
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.769806 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b2e082e0-4057-4d2e-bcb3-dc5286f0f705-node-exporter-textfile\") pod \"node-exporter-tdqc9\" (UID: \"b2e082e0-4057-4d2e-bcb3-dc5286f0f705\") " pod="openshift-monitoring/node-exporter-tdqc9"
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.769548 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b2e082e0-4057-4d2e-bcb3-dc5286f0f705-node-exporter-wtmp\") pod \"node-exporter-tdqc9\" (UID: \"b2e082e0-4057-4d2e-bcb3-dc5286f0f705\") " pod="openshift-monitoring/node-exporter-tdqc9"
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.770676 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/91030360-19d6-44e9-b7a3-4662fd652353-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-trwbk\" (UID: \"91030360-19d6-44e9-b7a3-4662fd652353\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-trwbk"
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.773686 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b2e082e0-4057-4d2e-bcb3-dc5286f0f705-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-tdqc9\" (UID: \"b2e082e0-4057-4d2e-bcb3-dc5286f0f705\") " pod="openshift-monitoring/node-exporter-tdqc9"
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.774355 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/91030360-19d6-44e9-b7a3-4662fd652353-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-trwbk\" (UID: \"91030360-19d6-44e9-b7a3-4662fd652353\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-trwbk"
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.777605 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/91030360-19d6-44e9-b7a3-4662fd652353-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-trwbk\" (UID: \"91030360-19d6-44e9-b7a3-4662fd652353\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-trwbk"
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.786872 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n994k\" (UniqueName: \"kubernetes.io/projected/91030360-19d6-44e9-b7a3-4662fd652353-kube-api-access-n994k\") pod \"kube-state-metrics-777cb5bd5d-trwbk\" (UID: \"91030360-19d6-44e9-b7a3-4662fd652353\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-trwbk"
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.793997 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99gz5\" (UniqueName: \"kubernetes.io/projected/b2e082e0-4057-4d2e-bcb3-dc5286f0f705-kube-api-access-99gz5\") pod \"node-exporter-tdqc9\" (UID: \"b2e082e0-4057-4d2e-bcb3-dc5286f0f705\") " pod="openshift-monitoring/node-exporter-tdqc9"
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.841116 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-kfp42"
Feb 02 13:07:59 crc kubenswrapper[4721]: I0202 13:07:59.928649 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-trwbk"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.235350 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-kfp42"]
Feb 02 13:08:00 crc kubenswrapper[4721]: W0202 13:08:00.245499 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba42cfe6_f3df_45f0_ab80_4781ce41b9a8.slice/crio-c6366f9b29d099585dc8ac4669e3ea8513fed89fc254a90ae06281b0dd22e2c6 WatchSource:0}: Error finding container c6366f9b29d099585dc8ac4669e3ea8513fed89fc254a90ae06281b0dd22e2c6: Status 404 returned error can't find the container with id c6366f9b29d099585dc8ac4669e3ea8513fed89fc254a90ae06281b0dd22e2c6
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.275603 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b2e082e0-4057-4d2e-bcb3-dc5286f0f705-node-exporter-tls\") pod \"node-exporter-tdqc9\" (UID: \"b2e082e0-4057-4d2e-bcb3-dc5286f0f705\") " pod="openshift-monitoring/node-exporter-tdqc9"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.281308 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b2e082e0-4057-4d2e-bcb3-dc5286f0f705-node-exporter-tls\") pod \"node-exporter-tdqc9\" (UID: \"b2e082e0-4057-4d2e-bcb3-dc5286f0f705\") " pod="openshift-monitoring/node-exporter-tdqc9"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.454452 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-trwbk"]
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.558956 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-v45hk"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.567627 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-tdqc9"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.633933 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.635949 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.646016 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.646959 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.646959 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-pn5xs"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.647725 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.648034 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.648200 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.648378 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.648542 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.657224 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.676872 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.689759 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/457655c2-194f-4643-804e-1024580bb2dc-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.690185 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/457655c2-194f-4643-804e-1024580bb2dc-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.690211 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/457655c2-194f-4643-804e-1024580bb2dc-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.690229 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/457655c2-194f-4643-804e-1024580bb2dc-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.690254 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/457655c2-194f-4643-804e-1024580bb2dc-config-out\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.690272 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/457655c2-194f-4643-804e-1024580bb2dc-tls-assets\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.690290 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/457655c2-194f-4643-804e-1024580bb2dc-config-volume\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.690308 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/457655c2-194f-4643-804e-1024580bb2dc-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.690330 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/457655c2-194f-4643-804e-1024580bb2dc-web-config\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.690349 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/457655c2-194f-4643-804e-1024580bb2dc-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.690432 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbgcw\" (UniqueName: \"kubernetes.io/projected/457655c2-194f-4643-804e-1024580bb2dc-kube-api-access-rbgcw\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.690494 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/457655c2-194f-4643-804e-1024580bb2dc-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.791496 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/457655c2-194f-4643-804e-1024580bb2dc-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.791546 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/457655c2-194f-4643-804e-1024580bb2dc-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.791577 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/457655c2-194f-4643-804e-1024580bb2dc-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.791609 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/457655c2-194f-4643-804e-1024580bb2dc-config-out\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.791629 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/457655c2-194f-4643-804e-1024580bb2dc-tls-assets\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.791649 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/457655c2-194f-4643-804e-1024580bb2dc-config-volume\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.791671 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/457655c2-194f-4643-804e-1024580bb2dc-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.791696 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/457655c2-194f-4643-804e-1024580bb2dc-web-config\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.791721 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/457655c2-194f-4643-804e-1024580bb2dc-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.791745 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbgcw\" (UniqueName: \"kubernetes.io/projected/457655c2-194f-4643-804e-1024580bb2dc-kube-api-access-rbgcw\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.791770 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/457655c2-194f-4643-804e-1024580bb2dc-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.791805 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/457655c2-194f-4643-804e-1024580bb2dc-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.792182 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/457655c2-194f-4643-804e-1024580bb2dc-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.792824 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/457655c2-194f-4643-804e-1024580bb2dc-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.793048 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/457655c2-194f-4643-804e-1024580bb2dc-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.796856 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/457655c2-194f-4643-804e-1024580bb2dc-config-volume\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.796915 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/457655c2-194f-4643-804e-1024580bb2dc-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.797711 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/457655c2-194f-4643-804e-1024580bb2dc-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.799088 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/457655c2-194f-4643-804e-1024580bb2dc-web-config\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.799399 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/457655c2-194f-4643-804e-1024580bb2dc-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.808521 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbgcw\" (UniqueName: \"kubernetes.io/projected/457655c2-194f-4643-804e-1024580bb2dc-kube-api-access-rbgcw\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.811563 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/457655c2-194f-4643-804e-1024580bb2dc-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.811562 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/457655c2-194f-4643-804e-1024580bb2dc-config-out\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.811672 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/457655c2-194f-4643-804e-1024580bb2dc-tls-assets\") pod \"alertmanager-main-0\" (UID: \"457655c2-194f-4643-804e-1024580bb2dc\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.963383 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.990126 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-trwbk" event={"ID":"91030360-19d6-44e9-b7a3-4662fd652353","Type":"ContainerStarted","Data":"3b8cbc931059e834864157637562a5feee03a982e922586a7e59170d1d27518d"} Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.992832 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-tdqc9" event={"ID":"b2e082e0-4057-4d2e-bcb3-dc5286f0f705","Type":"ContainerStarted","Data":"0928c5a0d605f5581fd8312d1bfddd598456ab35fce910457efbe9de99d72eaa"} Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.995952 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-kfp42" event={"ID":"ba42cfe6-f3df-45f0-ab80-4781ce41b9a8","Type":"ContainerStarted","Data":"1b7bf1966c3b128bd1483a149044887fe3a950e5d3fd087b17fb31775c9aac80"} Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.996006 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-kfp42" event={"ID":"ba42cfe6-f3df-45f0-ab80-4781ce41b9a8","Type":"ContainerStarted","Data":"ac1a62757b7b19989fc42bc889c03f7a6ad0d02af641d6914d8d0df3947cb103"} Feb 02 13:08:00 crc kubenswrapper[4721]: I0202 13:08:00.996022 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-kfp42" event={"ID":"ba42cfe6-f3df-45f0-ab80-4781ce41b9a8","Type":"ContainerStarted","Data":"c6366f9b29d099585dc8ac4669e3ea8513fed89fc254a90ae06281b0dd22e2c6"} Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.356749 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 02 13:08:01 crc kubenswrapper[4721]: W0202 13:08:01.365000 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod457655c2_194f_4643_804e_1024580bb2dc.slice/crio-c9c55688364b12f13c1729955131cdddc7e10c6fbe6221be9d4e4748c5d571d9 WatchSource:0}: Error finding container c9c55688364b12f13c1729955131cdddc7e10c6fbe6221be9d4e4748c5d571d9: Status 404 returned error can't find the container with id c9c55688364b12f13c1729955131cdddc7e10c6fbe6221be9d4e4748c5d571d9 Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.483171 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-c45b586c-w7sw7"] Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.485053 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.489668 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.489705 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.489859 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.489896 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-qhqzb" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.489946 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.489976 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-94jvo0rjqi4tt" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.491103 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.492771 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-c45b586c-w7sw7"] Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.499753 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6-metrics-client-ca\") pod \"thanos-querier-c45b586c-w7sw7\" (UID: \"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6\") " pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.499809 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-c45b586c-w7sw7\" (UID: \"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6\") " pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.499871 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-c45b586c-w7sw7\" (UID: \"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6\") " pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.499894 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-c45b586c-w7sw7\" (UID: \"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6\") " pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.499947 4721 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-c45b586c-w7sw7\" (UID: \"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6\") " pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.499976 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6-secret-grpc-tls\") pod \"thanos-querier-c45b586c-w7sw7\" (UID: \"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6\") " pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.500006 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vh7p\" (UniqueName: \"kubernetes.io/projected/8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6-kube-api-access-4vh7p\") pod \"thanos-querier-c45b586c-w7sw7\" (UID: \"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6\") " pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.500035 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6-secret-thanos-querier-tls\") pod \"thanos-querier-c45b586c-w7sw7\" (UID: \"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6\") " pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.601755 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-c45b586c-w7sw7\" (UID: \"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6\") " pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.604670 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-c45b586c-w7sw7\" (UID: \"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6\") " pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.604744 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6-secret-grpc-tls\") pod \"thanos-querier-c45b586c-w7sw7\" (UID: \"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6\") " pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.605443 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vh7p\" (UniqueName: \"kubernetes.io/projected/8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6-kube-api-access-4vh7p\") pod \"thanos-querier-c45b586c-w7sw7\" (UID: \"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6\") " pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.605508 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6-secret-thanos-querier-tls\") pod \"thanos-querier-c45b586c-w7sw7\" (UID: \"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6\") " pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.605548 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6-metrics-client-ca\") pod \"thanos-querier-c45b586c-w7sw7\" (UID: \"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6\") " pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.605604 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-c45b586c-w7sw7\" (UID: \"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6\") " pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.606360 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6-metrics-client-ca\") pod \"thanos-querier-c45b586c-w7sw7\" (UID: \"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6\") " pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.607161 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-c45b586c-w7sw7\" (UID: \"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6\") " pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.609680 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-c45b586c-w7sw7\" (UID: \"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6\") " pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.610113 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-c45b586c-w7sw7\" (UID: \"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6\") " pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.610387 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6-secret-grpc-tls\") pod \"thanos-querier-c45b586c-w7sw7\" (UID: \"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6\") " pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.610537 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6-secret-thanos-querier-tls\") pod 
\"thanos-querier-c45b586c-w7sw7\" (UID: \"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6\") " pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.611177 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-c45b586c-w7sw7\" (UID: \"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6\") " pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.613429 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-c45b586c-w7sw7\" (UID: \"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6\") " pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.626483 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vh7p\" (UniqueName: \"kubernetes.io/projected/8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6-kube-api-access-4vh7p\") pod \"thanos-querier-c45b586c-w7sw7\" (UID: \"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6\") " pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" Feb 02 13:08:01 crc kubenswrapper[4721]: I0202 13:08:01.811796 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" Feb 02 13:08:02 crc kubenswrapper[4721]: I0202 13:08:02.007313 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"457655c2-194f-4643-804e-1024580bb2dc","Type":"ContainerStarted","Data":"c9c55688364b12f13c1729955131cdddc7e10c6fbe6221be9d4e4748c5d571d9"} Feb 02 13:08:02 crc kubenswrapper[4721]: I0202 13:08:02.995720 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-c45b586c-w7sw7"] Feb 02 13:08:03 crc kubenswrapper[4721]: I0202 13:08:03.020775 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-tdqc9" event={"ID":"b2e082e0-4057-4d2e-bcb3-dc5286f0f705","Type":"ContainerStarted","Data":"a97cf8c7cb8dac71f8d7d6343b9b5bec8afc00f050646bb4233eda1ef42fc649"} Feb 02 13:08:03 crc kubenswrapper[4721]: I0202 13:08:03.025165 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-kfp42" event={"ID":"ba42cfe6-f3df-45f0-ab80-4781ce41b9a8","Type":"ContainerStarted","Data":"c85ea78c3369ef1e922e98597dddfe15d49464d701a13c80f730b8e2a9a14b09"} Feb 02 13:08:03 crc kubenswrapper[4721]: I0202 13:08:03.027345 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-trwbk" event={"ID":"91030360-19d6-44e9-b7a3-4662fd652353","Type":"ContainerStarted","Data":"57c494a76b8cf0bedc571ecbbb7f33e403a2afdd15b518dff1bf230652468193"} Feb 02 13:08:03 crc kubenswrapper[4721]: I0202 13:08:03.061594 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-kfp42" podStartSLOduration=2.146920806 podStartE2EDuration="4.061545814s" podCreationTimestamp="2026-02-02 13:07:59 +0000 UTC" firstStartedPulling="2026-02-02 13:08:00.667263114 +0000 UTC m=+420.969777503" lastFinishedPulling="2026-02-02 
13:08:02.581888122 +0000 UTC m=+422.884402511" observedRunningTime="2026-02-02 13:08:03.056095813 +0000 UTC m=+423.358610212" watchObservedRunningTime="2026-02-02 13:08:03.061545814 +0000 UTC m=+423.364060233" Feb 02 13:08:03 crc kubenswrapper[4721]: W0202 13:08:03.293284 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e14e0ea_313a_415a_b59f_b3f1a6f1c7a6.slice/crio-94543e454ffd29ff1ae9db295669e38cb1c120486d362a006003075eb52744a2 WatchSource:0}: Error finding container 94543e454ffd29ff1ae9db295669e38cb1c120486d362a006003075eb52744a2: Status 404 returned error can't find the container with id 94543e454ffd29ff1ae9db295669e38cb1c120486d362a006003075eb52744a2 Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.040779 4721 generic.go:334] "Generic (PLEG): container finished" podID="b2e082e0-4057-4d2e-bcb3-dc5286f0f705" containerID="a97cf8c7cb8dac71f8d7d6343b9b5bec8afc00f050646bb4233eda1ef42fc649" exitCode=0 Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.041244 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-tdqc9" event={"ID":"b2e082e0-4057-4d2e-bcb3-dc5286f0f705","Type":"ContainerDied","Data":"a97cf8c7cb8dac71f8d7d6343b9b5bec8afc00f050646bb4233eda1ef42fc649"} Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.042987 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" event={"ID":"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6","Type":"ContainerStarted","Data":"94543e454ffd29ff1ae9db295669e38cb1c120486d362a006003075eb52744a2"} Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.046690 4721 generic.go:334] "Generic (PLEG): container finished" podID="457655c2-194f-4643-804e-1024580bb2dc" containerID="1634746a7e0ac546432275ab3f0289d8acf1441a62c70fe647ce7766ffd4d59a" exitCode=0 Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.046890 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"457655c2-194f-4643-804e-1024580bb2dc","Type":"ContainerDied","Data":"1634746a7e0ac546432275ab3f0289d8acf1441a62c70fe647ce7766ffd4d59a"} Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.049635 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-trwbk" event={"ID":"91030360-19d6-44e9-b7a3-4662fd652353","Type":"ContainerStarted","Data":"9d78edb12edaa4134f3d5f4ff0235d875316cbf6cdbec4bf5987343405d9776c"} Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.049675 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-trwbk" event={"ID":"91030360-19d6-44e9-b7a3-4662fd652353","Type":"ContainerStarted","Data":"3f637eb1833ab57f426ced4edbe770701e8f1b08b20747c8420793731a2e0e06"} Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.112228 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-trwbk" podStartSLOduration=2.9374809859999997 podStartE2EDuration="5.112206228s" podCreationTimestamp="2026-02-02 13:07:59 +0000 UTC" firstStartedPulling="2026-02-02 13:08:00.45843594 +0000 UTC m=+420.760950329" lastFinishedPulling="2026-02-02 13:08:02.633161182 +0000 UTC m=+422.935675571" observedRunningTime="2026-02-02 13:08:04.108098404 +0000 UTC m=+424.410612803" watchObservedRunningTime="2026-02-02 13:08:04.112206228 +0000 UTC m=+424.414720617" Feb 02 13:08:04 crc 
kubenswrapper[4721]: I0202 13:08:04.275594 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-56c598d788-j9gjl"] Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.276815 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-56c598d788-j9gjl" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.303536 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-56c598d788-j9gjl"] Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.372710 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/848aa930-7630-4eed-b114-23853a30daac-trusted-ca-bundle\") pod \"console-56c598d788-j9gjl\" (UID: \"848aa930-7630-4eed-b114-23853a30daac\") " pod="openshift-console/console-56c598d788-j9gjl" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.372802 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/848aa930-7630-4eed-b114-23853a30daac-console-serving-cert\") pod \"console-56c598d788-j9gjl\" (UID: \"848aa930-7630-4eed-b114-23853a30daac\") " pod="openshift-console/console-56c598d788-j9gjl" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.372825 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/848aa930-7630-4eed-b114-23853a30daac-console-config\") pod \"console-56c598d788-j9gjl\" (UID: \"848aa930-7630-4eed-b114-23853a30daac\") " pod="openshift-console/console-56c598d788-j9gjl" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.372844 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/848aa930-7630-4eed-b114-23853a30daac-console-oauth-config\") pod \"console-56c598d788-j9gjl\" (UID: \"848aa930-7630-4eed-b114-23853a30daac\") " pod="openshift-console/console-56c598d788-j9gjl" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.372878 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st584\" (UniqueName: \"kubernetes.io/projected/848aa930-7630-4eed-b114-23853a30daac-kube-api-access-st584\") pod \"console-56c598d788-j9gjl\" (UID: \"848aa930-7630-4eed-b114-23853a30daac\") " pod="openshift-console/console-56c598d788-j9gjl" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.373031 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/848aa930-7630-4eed-b114-23853a30daac-oauth-serving-cert\") pod \"console-56c598d788-j9gjl\" (UID: \"848aa930-7630-4eed-b114-23853a30daac\") " pod="openshift-console/console-56c598d788-j9gjl" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.373136 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/848aa930-7630-4eed-b114-23853a30daac-service-ca\") pod \"console-56c598d788-j9gjl\" (UID: \"848aa930-7630-4eed-b114-23853a30daac\") " pod="openshift-console/console-56c598d788-j9gjl" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.474553 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/848aa930-7630-4eed-b114-23853a30daac-service-ca\") pod \"console-56c598d788-j9gjl\" (UID: \"848aa930-7630-4eed-b114-23853a30daac\") " pod="openshift-console/console-56c598d788-j9gjl" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.474621 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/848aa930-7630-4eed-b114-23853a30daac-trusted-ca-bundle\") pod \"console-56c598d788-j9gjl\" (UID: \"848aa930-7630-4eed-b114-23853a30daac\") " pod="openshift-console/console-56c598d788-j9gjl" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.474720 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/848aa930-7630-4eed-b114-23853a30daac-console-serving-cert\") pod \"console-56c598d788-j9gjl\" (UID: \"848aa930-7630-4eed-b114-23853a30daac\") " pod="openshift-console/console-56c598d788-j9gjl" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.474741 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/848aa930-7630-4eed-b114-23853a30daac-console-config\") pod \"console-56c598d788-j9gjl\" (UID: \"848aa930-7630-4eed-b114-23853a30daac\") " pod="openshift-console/console-56c598d788-j9gjl" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.474764 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/848aa930-7630-4eed-b114-23853a30daac-console-oauth-config\") pod \"console-56c598d788-j9gjl\" (UID: \"848aa930-7630-4eed-b114-23853a30daac\") " pod="openshift-console/console-56c598d788-j9gjl" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.474791 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st584\" (UniqueName: \"kubernetes.io/projected/848aa930-7630-4eed-b114-23853a30daac-kube-api-access-st584\") pod \"console-56c598d788-j9gjl\" (UID: \"848aa930-7630-4eed-b114-23853a30daac\") " pod="openshift-console/console-56c598d788-j9gjl" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.474830 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/848aa930-7630-4eed-b114-23853a30daac-oauth-serving-cert\") pod \"console-56c598d788-j9gjl\" (UID: \"848aa930-7630-4eed-b114-23853a30daac\") " pod="openshift-console/console-56c598d788-j9gjl" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.475888 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/848aa930-7630-4eed-b114-23853a30daac-service-ca\") pod \"console-56c598d788-j9gjl\" (UID: \"848aa930-7630-4eed-b114-23853a30daac\") " pod="openshift-console/console-56c598d788-j9gjl" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.476285 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/848aa930-7630-4eed-b114-23853a30daac-console-config\") pod \"console-56c598d788-j9gjl\" (UID: \"848aa930-7630-4eed-b114-23853a30daac\") " pod="openshift-console/console-56c598d788-j9gjl" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.476321 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/848aa930-7630-4eed-b114-23853a30daac-oauth-serving-cert\") pod \"console-56c598d788-j9gjl\" (UID: \"848aa930-7630-4eed-b114-23853a30daac\") " pod="openshift-console/console-56c598d788-j9gjl" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.477214 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/848aa930-7630-4eed-b114-23853a30daac-trusted-ca-bundle\") pod \"console-56c598d788-j9gjl\" (UID: \"848aa930-7630-4eed-b114-23853a30daac\") " pod="openshift-console/console-56c598d788-j9gjl" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.484033 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/848aa930-7630-4eed-b114-23853a30daac-console-oauth-config\") pod \"console-56c598d788-j9gjl\" (UID: \"848aa930-7630-4eed-b114-23853a30daac\") " pod="openshift-console/console-56c598d788-j9gjl" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.489232 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/848aa930-7630-4eed-b114-23853a30daac-console-serving-cert\") pod \"console-56c598d788-j9gjl\" (UID: \"848aa930-7630-4eed-b114-23853a30daac\") " pod="openshift-console/console-56c598d788-j9gjl" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.501047 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st584\" (UniqueName: \"kubernetes.io/projected/848aa930-7630-4eed-b114-23853a30daac-kube-api-access-st584\") pod \"console-56c598d788-j9gjl\" (UID: \"848aa930-7630-4eed-b114-23853a30daac\") " pod="openshift-console/console-56c598d788-j9gjl" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.603997 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-56c598d788-j9gjl" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.818989 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-67df565f78-vmr8d"] Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.820529 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.823338 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.823358 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.823548 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.823587 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-cm94e6tf5h16h" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.823731 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-zh6bg" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.823871 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.872709 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-67df565f78-vmr8d"] Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.881026 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtj2m\" (UniqueName: \"kubernetes.io/projected/4b390361-5a0f-423e-856c-dc0e11c32afa-kube-api-access-gtj2m\") pod \"metrics-server-67df565f78-vmr8d\" (UID: \"4b390361-5a0f-423e-856c-dc0e11c32afa\") " pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.881108 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b390361-5a0f-423e-856c-dc0e11c32afa-client-ca-bundle\") pod \"metrics-server-67df565f78-vmr8d\" (UID: \"4b390361-5a0f-423e-856c-dc0e11c32afa\") " pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.881129 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b390361-5a0f-423e-856c-dc0e11c32afa-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-67df565f78-vmr8d\" (UID: \"4b390361-5a0f-423e-856c-dc0e11c32afa\") " pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.881163 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/4b390361-5a0f-423e-856c-dc0e11c32afa-audit-log\") pod \"metrics-server-67df565f78-vmr8d\" (UID: \"4b390361-5a0f-423e-856c-dc0e11c32afa\") " pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.881187 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/4b390361-5a0f-423e-856c-dc0e11c32afa-metrics-server-audit-profiles\") pod \"metrics-server-67df565f78-vmr8d\" (UID: \"4b390361-5a0f-423e-856c-dc0e11c32afa\") " 
pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.881209 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/4b390361-5a0f-423e-856c-dc0e11c32afa-secret-metrics-server-tls\") pod \"metrics-server-67df565f78-vmr8d\" (UID: \"4b390361-5a0f-423e-856c-dc0e11c32afa\") " pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.881247 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4b390361-5a0f-423e-856c-dc0e11c32afa-secret-metrics-client-certs\") pod \"metrics-server-67df565f78-vmr8d\" (UID: \"4b390361-5a0f-423e-856c-dc0e11c32afa\") " pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.982248 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/4b390361-5a0f-423e-856c-dc0e11c32afa-audit-log\") pod \"metrics-server-67df565f78-vmr8d\" (UID: \"4b390361-5a0f-423e-856c-dc0e11c32afa\") " pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.982319 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/4b390361-5a0f-423e-856c-dc0e11c32afa-metrics-server-audit-profiles\") pod \"metrics-server-67df565f78-vmr8d\" (UID: \"4b390361-5a0f-423e-856c-dc0e11c32afa\") " pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.982357 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/4b390361-5a0f-423e-856c-dc0e11c32afa-secret-metrics-server-tls\") pod \"metrics-server-67df565f78-vmr8d\" (UID: \"4b390361-5a0f-423e-856c-dc0e11c32afa\") " pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.982413 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4b390361-5a0f-423e-856c-dc0e11c32afa-secret-metrics-client-certs\") pod \"metrics-server-67df565f78-vmr8d\" (UID: \"4b390361-5a0f-423e-856c-dc0e11c32afa\") " pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.982448 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtj2m\" (UniqueName: \"kubernetes.io/projected/4b390361-5a0f-423e-856c-dc0e11c32afa-kube-api-access-gtj2m\") pod \"metrics-server-67df565f78-vmr8d\" (UID: \"4b390361-5a0f-423e-856c-dc0e11c32afa\") " pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.982484 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b390361-5a0f-423e-856c-dc0e11c32afa-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-67df565f78-vmr8d\" (UID: \"4b390361-5a0f-423e-856c-dc0e11c32afa\") " pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 
13:08:04.982503 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b390361-5a0f-423e-856c-dc0e11c32afa-client-ca-bundle\") pod \"metrics-server-67df565f78-vmr8d\" (UID: \"4b390361-5a0f-423e-856c-dc0e11c32afa\") " pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.982865 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/4b390361-5a0f-423e-856c-dc0e11c32afa-audit-log\") pod \"metrics-server-67df565f78-vmr8d\" (UID: \"4b390361-5a0f-423e-856c-dc0e11c32afa\") " pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.984961 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b390361-5a0f-423e-856c-dc0e11c32afa-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-67df565f78-vmr8d\" (UID: \"4b390361-5a0f-423e-856c-dc0e11c32afa\") " pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.985646 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/4b390361-5a0f-423e-856c-dc0e11c32afa-metrics-server-audit-profiles\") pod \"metrics-server-67df565f78-vmr8d\" (UID: \"4b390361-5a0f-423e-856c-dc0e11c32afa\") " pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.988481 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b390361-5a0f-423e-856c-dc0e11c32afa-client-ca-bundle\") pod \"metrics-server-67df565f78-vmr8d\" (UID: \"4b390361-5a0f-423e-856c-dc0e11c32afa\") " pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.988553 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4b390361-5a0f-423e-856c-dc0e11c32afa-secret-metrics-client-certs\") pod \"metrics-server-67df565f78-vmr8d\" (UID: \"4b390361-5a0f-423e-856c-dc0e11c32afa\") " pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" Feb 02 13:08:04 crc kubenswrapper[4721]: I0202 13:08:04.989009 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/4b390361-5a0f-423e-856c-dc0e11c32afa-secret-metrics-server-tls\") pod \"metrics-server-67df565f78-vmr8d\" (UID: \"4b390361-5a0f-423e-856c-dc0e11c32afa\") " pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.001508 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtj2m\" (UniqueName: \"kubernetes.io/projected/4b390361-5a0f-423e-856c-dc0e11c32afa-kube-api-access-gtj2m\") pod \"metrics-server-67df565f78-vmr8d\" (UID: \"4b390361-5a0f-423e-856c-dc0e11c32afa\") " pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.005756 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-56c598d788-j9gjl"] Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.061298 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/node-exporter-tdqc9" event={"ID":"b2e082e0-4057-4d2e-bcb3-dc5286f0f705","Type":"ContainerStarted","Data":"61132e4046d20b2beecf9834c1430c15fe9b1be51c120c430351662cc889f7a0"} Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.061349 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-tdqc9" event={"ID":"b2e082e0-4057-4d2e-bcb3-dc5286f0f705","Type":"ContainerStarted","Data":"70b2191bafae7d0c557bb2d625421fdd4a5fddb420b7d6052056206a5ccc8038"} Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.090714 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-tdqc9" podStartSLOduration=4.059647081 podStartE2EDuration="6.090694864s" podCreationTimestamp="2026-02-02 13:07:59 +0000 UTC" firstStartedPulling="2026-02-02 13:08:00.61333862 +0000 UTC m=+420.915853009" lastFinishedPulling="2026-02-02 13:08:02.644386403 +0000 UTC m=+422.946900792" observedRunningTime="2026-02-02 13:08:05.080584504 +0000 UTC m=+425.383098913" watchObservedRunningTime="2026-02-02 13:08:05.090694864 +0000 UTC m=+425.393209253" Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.139414 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.280944 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-f7c547bd9-pxbl7"] Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.283865 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-f7c547bd9-pxbl7" Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.286744 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.288223 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.289002 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-f7c547bd9-pxbl7"] Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.387789 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/bf61d6b4-c01d-487e-bc9d-63d6b7654ce8-monitoring-plugin-cert\") pod \"monitoring-plugin-f7c547bd9-pxbl7\" (UID: \"bf61d6b4-c01d-487e-bc9d-63d6b7654ce8\") " pod="openshift-monitoring/monitoring-plugin-f7c547bd9-pxbl7" Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.488815 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/bf61d6b4-c01d-487e-bc9d-63d6b7654ce8-monitoring-plugin-cert\") pod \"monitoring-plugin-f7c547bd9-pxbl7\" (UID: \"bf61d6b4-c01d-487e-bc9d-63d6b7654ce8\") " pod="openshift-monitoring/monitoring-plugin-f7c547bd9-pxbl7" Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.492625 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/bf61d6b4-c01d-487e-bc9d-63d6b7654ce8-monitoring-plugin-cert\") pod \"monitoring-plugin-f7c547bd9-pxbl7\" (UID: \"bf61d6b4-c01d-487e-bc9d-63d6b7654ce8\") " pod="openshift-monitoring/monitoring-plugin-f7c547bd9-pxbl7" Feb 02 13:08:05 crc 
kubenswrapper[4721]: I0202 13:08:05.607217 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-f7c547bd9-pxbl7"
Feb 02 13:08:05 crc kubenswrapper[4721]: W0202 13:08:05.756872 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod848aa930_7630_4eed_b114_23853a30daac.slice/crio-fcb6049175d618686bd24850fd66d3afd1a92f853e4d4ca21439dea1da41c941 WatchSource:0}: Error finding container fcb6049175d618686bd24850fd66d3afd1a92f853e4d4ca21439dea1da41c941: Status 404 returned error can't find the container with id fcb6049175d618686bd24850fd66d3afd1a92f853e4d4ca21439dea1da41c941
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.829203 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.831825 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.834572 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.837589 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.837865 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-fx2vn"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.837962 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.838115 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.838185 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.838270 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-5n4f4oh7gp52d"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.838471 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.838573 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.839329 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.840850 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.848372 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.856760 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.862363 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.897733 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.897889 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.897955 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.897980 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.898100 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.898143 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.898176 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.898200 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.898217 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-web-config\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.898242 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-config\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.898273 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.898601 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.898624 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7rtg\" (UniqueName: \"kubernetes.io/projected/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-kube-api-access-z7rtg\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.898660 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.898709 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-config-out\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.898755 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.898792 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.898837 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.999826 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:05 crc kubenswrapper[4721]: I0202 13:08:05.999878 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:05.999902 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-web-config\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:05.999922 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-config\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:05.999941 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:05.999971 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:05.999986 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7rtg\" (UniqueName: \"kubernetes.io/projected/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-kube-api-access-z7rtg\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.000007 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.000023 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-config-out\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.000041 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.000058 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.000099 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.000122 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.000143 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.000165 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.000180 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.000210 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.000228 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.000981 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.001369 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.002033 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.006775 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.006786 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-config\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.006918 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.007560 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.009220 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.010081 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.010214 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.010397 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.010475 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.010498 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.010568 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.011900 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-config-out\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.013166 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.022766 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-web-config\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.027980 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7rtg\" (UniqueName: \"kubernetes.io/projected/6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49-kube-api-access-z7rtg\") pod \"prometheus-k8s-0\" (UID: \"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.068862 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-56c598d788-j9gjl" event={"ID":"848aa930-7630-4eed-b114-23853a30daac","Type":"ContainerStarted","Data":"fcb6049175d618686bd24850fd66d3afd1a92f853e4d4ca21439dea1da41c941"}
Feb 02 13:08:06 crc kubenswrapper[4721]: I0202 13:08:06.150276 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Feb 02 13:08:07 crc kubenswrapper[4721]: I0202 13:08:07.022693 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-67df565f78-vmr8d"]
Feb 02 13:08:07 crc kubenswrapper[4721]: W0202 13:08:07.031016 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b390361_5a0f_423e_856c_dc0e11c32afa.slice/crio-502db066fbe14662fbccb4b34c2e02e4a16454b2d8f5a88b36628f40bbb90b1a WatchSource:0}: Error finding container 502db066fbe14662fbccb4b34c2e02e4a16454b2d8f5a88b36628f40bbb90b1a: Status 404 returned error can't find the container with id 502db066fbe14662fbccb4b34c2e02e4a16454b2d8f5a88b36628f40bbb90b1a
Feb 02 13:08:07 crc kubenswrapper[4721]: I0202 13:08:07.079671 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-56c598d788-j9gjl" event={"ID":"848aa930-7630-4eed-b114-23853a30daac","Type":"ContainerStarted","Data":"98c0eac20465229d8bff6225571570b540d801c33450e2e81bc8f079fbc7cf32"}
Feb 02 13:08:07 crc kubenswrapper[4721]: I0202 13:08:07.083245 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"457655c2-194f-4643-804e-1024580bb2dc","Type":"ContainerStarted","Data":"6014a97404cf5ad936fba27378735c16a178bbda67ab9b8ee5d23e279778974b"}
Feb 02 13:08:07 crc kubenswrapper[4721]: I0202 13:08:07.083287 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"457655c2-194f-4643-804e-1024580bb2dc","Type":"ContainerStarted","Data":"e5be9ae087a265784a87924481dbf0d70d6cc88c0165438f8f649e806c47ce68"}
Feb 02 13:08:07 crc kubenswrapper[4721]: I0202 13:08:07.089926 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" event={"ID":"4b390361-5a0f-423e-856c-dc0e11c32afa","Type":"ContainerStarted","Data":"502db066fbe14662fbccb4b34c2e02e4a16454b2d8f5a88b36628f40bbb90b1a"}
Feb 02 13:08:07 crc kubenswrapper[4721]: I0202 13:08:07.093426 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" event={"ID":"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6","Type":"ContainerStarted","Data":"3b95b2e35c0e742e5d024aecd741781a45ed671ce2a62f655e84b5058c145d03"}
Feb 02 13:08:07 crc kubenswrapper[4721]: I0202 13:08:07.093468 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" event={"ID":"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6","Type":"ContainerStarted","Data":"bf0a86d1164c453faf37d61e4a79446e93c9dba37cb16d4b6fe5e036b7743d6e"}
Feb 02 13:08:07 crc kubenswrapper[4721]: I0202 13:08:07.100527 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-f7c547bd9-pxbl7"]
Feb 02 13:08:07 crc kubenswrapper[4721]: I0202 13:08:07.167916 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-56c598d788-j9gjl" podStartSLOduration=3.167892334 podStartE2EDuration="3.167892334s" podCreationTimestamp="2026-02-02 13:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:08:07.108740246 +0000 UTC m=+427.411254645" watchObservedRunningTime="2026-02-02 13:08:07.167892334 +0000 UTC m=+427.470406723"
Feb 02 13:08:07 crc kubenswrapper[4721]: I0202 13:08:07.170239 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Feb 02 13:08:07 crc kubenswrapper[4721]: W0202 13:08:07.178253 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fdb4e96_b41b_4d7e_8a97_c0ed2705ed49.slice/crio-f5d0b5dc0a903cd81476b7fca9028d99ceb6595f223e4e576b99ebc2e7e736ac WatchSource:0}: Error finding container f5d0b5dc0a903cd81476b7fca9028d99ceb6595f223e4e576b99ebc2e7e736ac: Status 404 returned error can't find the container with id f5d0b5dc0a903cd81476b7fca9028d99ceb6595f223e4e576b99ebc2e7e736ac
Feb 02 13:08:08 crc kubenswrapper[4721]: I0202 13:08:08.108344 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" event={"ID":"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6","Type":"ContainerStarted","Data":"6f5be74219f20ff773d039749d933e7c2cfc283210466387fdc8b0bb405fd3ee"}
Feb 02 13:08:08 crc kubenswrapper[4721]: I0202 13:08:08.110409 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-f7c547bd9-pxbl7" event={"ID":"bf61d6b4-c01d-487e-bc9d-63d6b7654ce8","Type":"ContainerStarted","Data":"a0e1e6aaba0fac91908a6a08f3ac88027b1217e86b61ed748f66fa256466235d"}
Feb 02 13:08:08 crc kubenswrapper[4721]: I0202 13:08:08.117807 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"457655c2-194f-4643-804e-1024580bb2dc","Type":"ContainerStarted","Data":"e686c72aa34ec5e05d89ec828e5f4d9549b3fba03a08f608dd269fa494337b4e"}
Feb 02 13:08:08 crc kubenswrapper[4721]: I0202 13:08:08.117853 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"457655c2-194f-4643-804e-1024580bb2dc","Type":"ContainerStarted","Data":"dd09cf080dd6a71f2040e0dbe1ea210b2af0170ab715b399c00d8440e164100c"}
Feb 02 13:08:08 crc kubenswrapper[4721]: I0202 13:08:08.117868 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"457655c2-194f-4643-804e-1024580bb2dc","Type":"ContainerStarted","Data":"c4130c3a7ff7eeafacdfd6a229692ab0739ce9e0bb486c1e29d9b9d427354026"}
Feb 02 13:08:08 crc kubenswrapper[4721]: I0202 13:08:08.119704 4721 generic.go:334] "Generic (PLEG): container finished" podID="6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49" containerID="1c818404e0faa701e4e3f47978d2fa944d98d0b6c0d030c1cda3ff2480c99ac1" exitCode=0
Feb 02 13:08:08 crc kubenswrapper[4721]: I0202 13:08:08.119737 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49","Type":"ContainerDied","Data":"1c818404e0faa701e4e3f47978d2fa944d98d0b6c0d030c1cda3ff2480c99ac1"}
Feb 02 13:08:08 crc kubenswrapper[4721]: I0202 13:08:08.119766 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49","Type":"ContainerStarted","Data":"f5d0b5dc0a903cd81476b7fca9028d99ceb6595f223e4e576b99ebc2e7e736ac"}
Feb 02 13:08:10 crc kubenswrapper[4721]: I0202 13:08:10.134164 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-f7c547bd9-pxbl7" event={"ID":"bf61d6b4-c01d-487e-bc9d-63d6b7654ce8","Type":"ContainerStarted","Data":"2b0c7f1694516f76b0d9f071addf7def8ca086434f1c73016cb2c92bf9e26304"}
Feb 02 13:08:10 crc kubenswrapper[4721]: I0202 13:08:10.134498 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-f7c547bd9-pxbl7"
Feb 02 13:08:10 crc kubenswrapper[4721]: I0202 13:08:10.139694 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"457655c2-194f-4643-804e-1024580bb2dc","Type":"ContainerStarted","Data":"49f640be331dbb1f8380b34c42bc0c7e161ddced43c334595377ba9590eb0260"}
Feb 02 13:08:10 crc kubenswrapper[4721]: I0202 13:08:10.141203 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" event={"ID":"4b390361-5a0f-423e-856c-dc0e11c32afa","Type":"ContainerStarted","Data":"ccd743f96e31ef403c4d2a2fdec1a59e18855c530b39e212eda186903b39c653"}
Feb 02 13:08:10 crc kubenswrapper[4721]: I0202 13:08:10.143835 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-f7c547bd9-pxbl7"
Feb 02 13:08:10 crc kubenswrapper[4721]: I0202 13:08:10.144880 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" event={"ID":"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6","Type":"ContainerStarted","Data":"a05aca6394662561b1863f8833f90ece68c32e66e41bdcd7a2f8391bf9af2dff"}
Feb 02 13:08:10 crc kubenswrapper[4721]: I0202 13:08:10.144900 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" event={"ID":"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6","Type":"ContainerStarted","Data":"c34fb08a72cd0bd69ce0ac37e22dd1057bf4b2ffdcd638b51f9dfd275204a829"}
Feb 02 13:08:10 crc kubenswrapper[4721]: I0202 13:08:10.144910 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" event={"ID":"8e14e0ea-313a-415a-b59f-b3f1a6f1c7a6","Type":"ContainerStarted","Data":"7cb9fb54d27010886f81f94f5e5036f880f9c2bfca8528ca0e738eaadde15c2d"}
Feb 02 13:08:10 crc kubenswrapper[4721]: I0202 13:08:10.145048 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7"
Feb 02 13:08:10 crc kubenswrapper[4721]: I0202 13:08:10.152751 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-f7c547bd9-pxbl7" podStartSLOduration=2.959046082 podStartE2EDuration="5.152725268s" podCreationTimestamp="2026-02-02 13:08:05 +0000 UTC" firstStartedPulling="2026-02-02 13:08:07.10816758 +0000 UTC m=+427.410681969" lastFinishedPulling="2026-02-02 13:08:09.301846766 +0000 UTC m=+429.604361155" observedRunningTime="2026-02-02 13:08:10.149050627 +0000 UTC m=+430.451565026" watchObservedRunningTime="2026-02-02 13:08:10.152725268 +0000 UTC m=+430.455239657"
Feb 02 13:08:10 crc kubenswrapper[4721]: I0202 13:08:10.204731 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.539774546 podStartE2EDuration="10.204705167s" podCreationTimestamp="2026-02-02 13:08:00 +0000 UTC" firstStartedPulling="2026-02-02 13:08:01.367236297 +0000 UTC m=+421.669750686" lastFinishedPulling="2026-02-02 13:08:09.032166878 +0000 UTC m=+429.334681307" observedRunningTime="2026-02-02 13:08:10.200412508 +0000 UTC m=+430.502926917" watchObservedRunningTime="2026-02-02 13:08:10.204705167 +0000 UTC m=+430.507219556"
Feb 02 13:08:10 crc kubenswrapper[4721]: I0202 13:08:10.222261 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7" podStartSLOduration=3.457308395 podStartE2EDuration="9.222243043s" podCreationTimestamp="2026-02-02 13:08:01 +0000 UTC" firstStartedPulling="2026-02-02 13:08:03.295430251 +0000 UTC m=+423.597944640" lastFinishedPulling="2026-02-02 13:08:09.060364899 +0000 UTC m=+429.362879288" observedRunningTime="2026-02-02 13:08:10.221966586 +0000 UTC m=+430.524480975" watchObservedRunningTime="2026-02-02 13:08:10.222243043 +0000 UTC m=+430.524757442"
Feb 02 13:08:10 crc kubenswrapper[4721]: I0202 13:08:10.241287 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" podStartSLOduration=3.997268119 podStartE2EDuration="6.241272359s" podCreationTimestamp="2026-02-02 13:08:04 +0000 UTC" firstStartedPulling="2026-02-02 13:08:07.033249205 +0000 UTC m=+427.335763594" lastFinishedPulling="2026-02-02 13:08:09.277253445 +0000 UTC m=+429.579767834" observedRunningTime="2026-02-02 13:08:10.241053354 +0000 UTC m=+430.543567743" watchObservedRunningTime="2026-02-02 13:08:10.241272359 +0000 UTC m=+430.543786748"
Feb 02 13:08:11 crc kubenswrapper[4721]: I0202 13:08:11.159972 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-c45b586c-w7sw7"
Feb 02 13:08:12 crc kubenswrapper[4721]: I0202 13:08:12.162881 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49","Type":"ContainerStarted","Data":"8034925493acc9a842b3362da631ba2e378e3ea6e53347bfa87f5a9bf5d1f315"}
Feb 02 13:08:13 crc kubenswrapper[4721]: I0202 13:08:13.171859 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49","Type":"ContainerStarted","Data":"76b4303d16208fc1710fdb661ae395c4a9c0ba9d682c5ef2419bdd5d3f775032"}
Feb 02 13:08:13 crc kubenswrapper[4721]: I0202 13:08:13.171900 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49","Type":"ContainerStarted","Data":"029730ca64213cc0c7271b36994601e0c25adab46b40896e2ef4be464f4d9813"}
Feb 02 13:08:13 crc kubenswrapper[4721]: I0202 13:08:13.171915 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49","Type":"ContainerStarted","Data":"edbb97c0330ed4b68671f1be1e988325f2b163687c2abdab120af47ffd68d8cd"}
Feb 02 13:08:13 crc kubenswrapper[4721]: I0202 13:08:13.171929 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49","Type":"ContainerStarted","Data":"22bd3c7b420c6d7469ebc29d365aa850fb718e42a5c2bed75986f1e0311844f8"}
Feb 02 13:08:13 crc kubenswrapper[4721]: I0202 13:08:13.171940 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6fdb4e96-b41b-4d7e-8a97-c0ed2705ed49","Type":"ContainerStarted","Data":"7482981fa8cc59d4faf4f647912a9f2d95a317750854ed3924c3f6c0e2b62927"} Feb 02 13:08:13 crc kubenswrapper[4721]: I0202 13:08:13.205170 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.445472693 podStartE2EDuration="8.205154703s" podCreationTimestamp="2026-02-02 13:08:05 +0000 UTC" firstStartedPulling="2026-02-02 13:08:08.145595008 +0000 UTC m=+428.448109397" lastFinishedPulling="2026-02-02 13:08:11.905277018 +0000 UTC m=+432.207791407" observedRunningTime="2026-02-02 13:08:13.200886495 +0000 UTC m=+433.503400904" watchObservedRunningTime="2026-02-02 13:08:13.205154703 +0000 UTC m=+433.507669092" Feb 02 13:08:14 crc kubenswrapper[4721]: I0202 13:08:14.604950 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-56c598d788-j9gjl" Feb 02 13:08:14 crc kubenswrapper[4721]: I0202 13:08:14.605348 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-56c598d788-j9gjl" Feb 02 13:08:14 crc kubenswrapper[4721]: I0202 13:08:14.610731 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-56c598d788-j9gjl" Feb 02 13:08:14 crc kubenswrapper[4721]: I0202 13:08:14.764667 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:08:14 crc kubenswrapper[4721]: I0202 13:08:14.764743 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:08:14 crc kubenswrapper[4721]: I0202 13:08:14.764799 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" Feb 02 13:08:14 crc kubenswrapper[4721]: I0202 13:08:14.765638 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b87a5ff0b46a353772eefb47418659c5151aa34831e5ecd0ccf0f601bac1ccfe"} pod="openshift-machine-config-operator/machine-config-daemon-rppjz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 13:08:14 crc kubenswrapper[4721]: I0202 13:08:14.765747 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" containerID="cri-o://b87a5ff0b46a353772eefb47418659c5151aa34831e5ecd0ccf0f601bac1ccfe" gracePeriod=600 Feb 02 13:08:15 crc kubenswrapper[4721]: I0202 13:08:15.185734 4721 generic.go:334] "Generic (PLEG): container finished" podID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerID="b87a5ff0b46a353772eefb47418659c5151aa34831e5ecd0ccf0f601bac1ccfe" exitCode=0 Feb 02 13:08:15 crc 
kubenswrapper[4721]: I0202 13:08:15.185799 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerDied","Data":"b87a5ff0b46a353772eefb47418659c5151aa34831e5ecd0ccf0f601bac1ccfe"} Feb 02 13:08:15 crc kubenswrapper[4721]: I0202 13:08:15.186126 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerStarted","Data":"014ac4f70cadb2e5ed3977a4b883172b9e9190b5cdf25295500702abdd38ede7"} Feb 02 13:08:15 crc kubenswrapper[4721]: I0202 13:08:15.186172 4721 scope.go:117] "RemoveContainer" containerID="142fea12fc056e370e6c152f1358ffca7f5345b290b58801d6ec8ca042262591" Feb 02 13:08:15 crc kubenswrapper[4721]: I0202 13:08:15.192244 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-56c598d788-j9gjl" Feb 02 13:08:15 crc kubenswrapper[4721]: I0202 13:08:15.255875 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-2dsnx"] Feb 02 13:08:16 crc kubenswrapper[4721]: I0202 13:08:16.151425 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 02 13:08:25 crc kubenswrapper[4721]: I0202 13:08:25.140029 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" Feb 02 13:08:25 crc kubenswrapper[4721]: I0202 13:08:25.140574 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" Feb 02 13:08:40 crc kubenswrapper[4721]: I0202 13:08:40.313644 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-2dsnx" podUID="ae3f417e-2bae-44dd-973f-5314b6f64972" containerName="console" containerID="cri-o://f6b3ae8c770ea590746df91083a751d3ccb8e36d4619a57121eebb509e13924a" gracePeriod=15 Feb 02 13:08:40 crc kubenswrapper[4721]: I0202 13:08:40.666100 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-2dsnx_ae3f417e-2bae-44dd-973f-5314b6f64972/console/0.log" Feb 02 13:08:40 crc kubenswrapper[4721]: I0202 13:08:40.666177 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-2dsnx" Feb 02 13:08:40 crc kubenswrapper[4721]: I0202 13:08:40.737420 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae3f417e-2bae-44dd-973f-5314b6f64972-trusted-ca-bundle\") pod \"ae3f417e-2bae-44dd-973f-5314b6f64972\" (UID: \"ae3f417e-2bae-44dd-973f-5314b6f64972\") " Feb 02 13:08:40 crc kubenswrapper[4721]: I0202 13:08:40.737764 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9h96\" (UniqueName: \"kubernetes.io/projected/ae3f417e-2bae-44dd-973f-5314b6f64972-kube-api-access-w9h96\") pod \"ae3f417e-2bae-44dd-973f-5314b6f64972\" (UID: \"ae3f417e-2bae-44dd-973f-5314b6f64972\") " Feb 02 13:08:40 crc kubenswrapper[4721]: I0202 13:08:40.737822 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ae3f417e-2bae-44dd-973f-5314b6f64972-oauth-serving-cert\") pod \"ae3f417e-2bae-44dd-973f-5314b6f64972\" (UID: \"ae3f417e-2bae-44dd-973f-5314b6f64972\") " Feb 02 13:08:40 crc kubenswrapper[4721]: I0202 13:08:40.737852 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ae3f417e-2bae-44dd-973f-5314b6f64972-service-ca\") pod \"ae3f417e-2bae-44dd-973f-5314b6f64972\" (UID: \"ae3f417e-2bae-44dd-973f-5314b6f64972\") " Feb 02 13:08:40 crc kubenswrapper[4721]: I0202 13:08:40.737922 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae3f417e-2bae-44dd-973f-5314b6f64972-console-serving-cert\") pod \"ae3f417e-2bae-44dd-973f-5314b6f64972\" (UID: \"ae3f417e-2bae-44dd-973f-5314b6f64972\") " Feb 02 13:08:40 crc kubenswrapper[4721]: I0202 13:08:40.737943 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ae3f417e-2bae-44dd-973f-5314b6f64972-console-config\") pod \"ae3f417e-2bae-44dd-973f-5314b6f64972\" (UID: \"ae3f417e-2bae-44dd-973f-5314b6f64972\") " Feb 02 13:08:40 crc kubenswrapper[4721]: I0202 13:08:40.737959 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ae3f417e-2bae-44dd-973f-5314b6f64972-console-oauth-config\") pod \"ae3f417e-2bae-44dd-973f-5314b6f64972\" (UID: \"ae3f417e-2bae-44dd-973f-5314b6f64972\") " Feb 02 13:08:40 crc kubenswrapper[4721]: I0202 13:08:40.738940 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae3f417e-2bae-44dd-973f-5314b6f64972-console-config" (OuterVolumeSpecName: "console-config") pod "ae3f417e-2bae-44dd-973f-5314b6f64972" (UID: "ae3f417e-2bae-44dd-973f-5314b6f64972"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:08:40 crc kubenswrapper[4721]: I0202 13:08:40.738951 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae3f417e-2bae-44dd-973f-5314b6f64972-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ae3f417e-2bae-44dd-973f-5314b6f64972" (UID: "ae3f417e-2bae-44dd-973f-5314b6f64972"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:08:40 crc kubenswrapper[4721]: I0202 13:08:40.739130 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae3f417e-2bae-44dd-973f-5314b6f64972-service-ca" (OuterVolumeSpecName: "service-ca") pod "ae3f417e-2bae-44dd-973f-5314b6f64972" (UID: "ae3f417e-2bae-44dd-973f-5314b6f64972"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:08:40 crc kubenswrapper[4721]: I0202 13:08:40.739459 4721 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ae3f417e-2bae-44dd-973f-5314b6f64972-console-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:08:40 crc kubenswrapper[4721]: I0202 13:08:40.739535 4721 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ae3f417e-2bae-44dd-973f-5314b6f64972-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:08:40 crc kubenswrapper[4721]: I0202 13:08:40.739599 4721 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ae3f417e-2bae-44dd-973f-5314b6f64972-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:08:40 crc kubenswrapper[4721]: I0202 13:08:40.741900 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae3f417e-2bae-44dd-973f-5314b6f64972-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ae3f417e-2bae-44dd-973f-5314b6f64972" (UID: "ae3f417e-2bae-44dd-973f-5314b6f64972"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:08:40 crc kubenswrapper[4721]: I0202 13:08:40.743605 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3f417e-2bae-44dd-973f-5314b6f64972-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ae3f417e-2bae-44dd-973f-5314b6f64972" (UID: "ae3f417e-2bae-44dd-973f-5314b6f64972"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:08:40 crc kubenswrapper[4721]: I0202 13:08:40.744755 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3f417e-2bae-44dd-973f-5314b6f64972-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ae3f417e-2bae-44dd-973f-5314b6f64972" (UID: "ae3f417e-2bae-44dd-973f-5314b6f64972"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:08:40 crc kubenswrapper[4721]: I0202 13:08:40.745819 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae3f417e-2bae-44dd-973f-5314b6f64972-kube-api-access-w9h96" (OuterVolumeSpecName: "kube-api-access-w9h96") pod "ae3f417e-2bae-44dd-973f-5314b6f64972" (UID: "ae3f417e-2bae-44dd-973f-5314b6f64972"). InnerVolumeSpecName "kube-api-access-w9h96". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:08:40 crc kubenswrapper[4721]: I0202 13:08:40.841045 4721 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae3f417e-2bae-44dd-973f-5314b6f64972-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:08:40 crc kubenswrapper[4721]: I0202 13:08:40.841113 4721 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ae3f417e-2bae-44dd-973f-5314b6f64972-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:08:40 crc kubenswrapper[4721]: I0202 13:08:40.841126 4721 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae3f417e-2bae-44dd-973f-5314b6f64972-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:08:40 crc kubenswrapper[4721]: I0202 13:08:40.841138 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9h96\" (UniqueName: \"kubernetes.io/projected/ae3f417e-2bae-44dd-973f-5314b6f64972-kube-api-access-w9h96\") on node \"crc\" DevicePath \"\"" Feb 02 13:08:41 crc kubenswrapper[4721]: I0202 13:08:41.348939 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-2dsnx_ae3f417e-2bae-44dd-973f-5314b6f64972/console/0.log" Feb 02 13:08:41 crc kubenswrapper[4721]: I0202 13:08:41.349026 4721 generic.go:334] "Generic (PLEG): container finished" podID="ae3f417e-2bae-44dd-973f-5314b6f64972" containerID="f6b3ae8c770ea590746df91083a751d3ccb8e36d4619a57121eebb509e13924a" exitCode=2 Feb 02 13:08:41 crc kubenswrapper[4721]: I0202 13:08:41.349102 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2dsnx" event={"ID":"ae3f417e-2bae-44dd-973f-5314b6f64972","Type":"ContainerDied","Data":"f6b3ae8c770ea590746df91083a751d3ccb8e36d4619a57121eebb509e13924a"} Feb 02 13:08:41 crc kubenswrapper[4721]: I0202 13:08:41.349148 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-2dsnx" Feb 02 13:08:41 crc kubenswrapper[4721]: I0202 13:08:41.349172 4721 scope.go:117] "RemoveContainer" containerID="f6b3ae8c770ea590746df91083a751d3ccb8e36d4619a57121eebb509e13924a" Feb 02 13:08:41 crc kubenswrapper[4721]: I0202 13:08:41.349152 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2dsnx" event={"ID":"ae3f417e-2bae-44dd-973f-5314b6f64972","Type":"ContainerDied","Data":"03d46b30c74ef5c8430b448c7ad678889af4272d7acbd48aceae6628fd4f71b5"} Feb 02 13:08:41 crc kubenswrapper[4721]: I0202 13:08:41.365639 4721 scope.go:117] "RemoveContainer" containerID="f6b3ae8c770ea590746df91083a751d3ccb8e36d4619a57121eebb509e13924a" Feb 02 13:08:41 crc kubenswrapper[4721]: E0202 13:08:41.366207 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6b3ae8c770ea590746df91083a751d3ccb8e36d4619a57121eebb509e13924a\": container with ID starting with f6b3ae8c770ea590746df91083a751d3ccb8e36d4619a57121eebb509e13924a not found: ID does not exist" containerID="f6b3ae8c770ea590746df91083a751d3ccb8e36d4619a57121eebb509e13924a" Feb 02 13:08:41 crc kubenswrapper[4721]: I0202 13:08:41.366255 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6b3ae8c770ea590746df91083a751d3ccb8e36d4619a57121eebb509e13924a"} err="failed to get container status \"f6b3ae8c770ea590746df91083a751d3ccb8e36d4619a57121eebb509e13924a\": rpc error: code = NotFound desc = could not find container \"f6b3ae8c770ea590746df91083a751d3ccb8e36d4619a57121eebb509e13924a\": container with ID starting with f6b3ae8c770ea590746df91083a751d3ccb8e36d4619a57121eebb509e13924a not found: ID does not exist" Feb 02 13:08:41 crc kubenswrapper[4721]: I0202 13:08:41.389412 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-2dsnx"] Feb 02 13:08:41 crc kubenswrapper[4721]: I0202 13:08:41.395114 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-2dsnx"] Feb 02 13:08:42 crc kubenswrapper[4721]: I0202 13:08:42.418158 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae3f417e-2bae-44dd-973f-5314b6f64972" path="/var/lib/kubelet/pods/ae3f417e-2bae-44dd-973f-5314b6f64972/volumes" Feb 02 13:08:45 crc kubenswrapper[4721]: I0202 13:08:45.145466 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" Feb 02 13:08:45 crc kubenswrapper[4721]: I0202 13:08:45.150385 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-67df565f78-vmr8d" Feb 02 13:09:06 crc kubenswrapper[4721]: I0202 13:09:06.151597 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 02 13:09:06 crc kubenswrapper[4721]: I0202 13:09:06.182537 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 02 13:09:06 crc kubenswrapper[4721]: I0202 13:09:06.553667 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 02 13:09:45 crc kubenswrapper[4721]: I0202 13:09:45.896025 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-858d4f646b-v8xpv"] Feb 02 13:09:45 crc kubenswrapper[4721]: E0202 13:09:45.896947 
4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae3f417e-2bae-44dd-973f-5314b6f64972" containerName="console" Feb 02 13:09:45 crc kubenswrapper[4721]: I0202 13:09:45.896965 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae3f417e-2bae-44dd-973f-5314b6f64972" containerName="console" Feb 02 13:09:45 crc kubenswrapper[4721]: I0202 13:09:45.897093 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae3f417e-2bae-44dd-973f-5314b6f64972" containerName="console" Feb 02 13:09:45 crc kubenswrapper[4721]: I0202 13:09:45.897644 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-858d4f646b-v8xpv" Feb 02 13:09:45 crc kubenswrapper[4721]: I0202 13:09:45.913478 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-858d4f646b-v8xpv"] Feb 02 13:09:45 crc kubenswrapper[4721]: I0202 13:09:45.962792 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-trusted-ca-bundle\") pod \"console-858d4f646b-v8xpv\" (UID: \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\") " pod="openshift-console/console-858d4f646b-v8xpv" Feb 02 13:09:45 crc kubenswrapper[4721]: I0202 13:09:45.962869 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-658px\" (UniqueName: \"kubernetes.io/projected/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-kube-api-access-658px\") pod \"console-858d4f646b-v8xpv\" (UID: \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\") " pod="openshift-console/console-858d4f646b-v8xpv" Feb 02 13:09:45 crc kubenswrapper[4721]: I0202 13:09:45.962906 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-console-oauth-config\") pod \"console-858d4f646b-v8xpv\" (UID: \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\") " pod="openshift-console/console-858d4f646b-v8xpv" Feb 02 13:09:45 crc kubenswrapper[4721]: I0202 13:09:45.962927 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-oauth-serving-cert\") pod \"console-858d4f646b-v8xpv\" (UID: \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\") " pod="openshift-console/console-858d4f646b-v8xpv" Feb 02 13:09:45 crc kubenswrapper[4721]: I0202 13:09:45.962954 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-console-serving-cert\") pod \"console-858d4f646b-v8xpv\" (UID: \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\") " pod="openshift-console/console-858d4f646b-v8xpv" Feb 02 13:09:45 crc kubenswrapper[4721]: I0202 13:09:45.963006 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-console-config\") pod \"console-858d4f646b-v8xpv\" (UID: \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\") " pod="openshift-console/console-858d4f646b-v8xpv" Feb 02 13:09:45 crc kubenswrapper[4721]: I0202 13:09:45.963029 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-service-ca\") pod \"console-858d4f646b-v8xpv\" (UID: \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\") " pod="openshift-console/console-858d4f646b-v8xpv" Feb 02 13:09:46 crc kubenswrapper[4721]: I0202 13:09:46.063750 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-console-config\") pod \"console-858d4f646b-v8xpv\" (UID: \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\") " pod="openshift-console/console-858d4f646b-v8xpv" Feb 02 13:09:46 crc kubenswrapper[4721]: I0202 13:09:46.063810 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-service-ca\") pod \"console-858d4f646b-v8xpv\" (UID: \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\") " pod="openshift-console/console-858d4f646b-v8xpv" Feb 02 13:09:46 crc kubenswrapper[4721]: I0202 13:09:46.063862 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-trusted-ca-bundle\") pod \"console-858d4f646b-v8xpv\" (UID: \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\") " pod="openshift-console/console-858d4f646b-v8xpv" Feb 02 13:09:46 crc kubenswrapper[4721]: I0202 13:09:46.063895 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-658px\" (UniqueName: \"kubernetes.io/projected/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-kube-api-access-658px\") pod \"console-858d4f646b-v8xpv\" (UID: \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\") " pod="openshift-console/console-858d4f646b-v8xpv" Feb 02 13:09:46 crc kubenswrapper[4721]: I0202 13:09:46.063920 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-console-oauth-config\") pod \"console-858d4f646b-v8xpv\" (UID: \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\") " pod="openshift-console/console-858d4f646b-v8xpv" Feb 02 13:09:46 crc kubenswrapper[4721]: I0202 13:09:46.063936 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-oauth-serving-cert\") pod \"console-858d4f646b-v8xpv\" (UID: \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\") " pod="openshift-console/console-858d4f646b-v8xpv" Feb 02 13:09:46 crc kubenswrapper[4721]: I0202 13:09:46.063954 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-console-serving-cert\") pod \"console-858d4f646b-v8xpv\" (UID: \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\") " pod="openshift-console/console-858d4f646b-v8xpv" Feb 02 13:09:46 crc kubenswrapper[4721]: I0202 13:09:46.064847 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-service-ca\") pod \"console-858d4f646b-v8xpv\" (UID: \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\") " pod="openshift-console/console-858d4f646b-v8xpv" Feb 02 13:09:46 crc kubenswrapper[4721]: I0202 13:09:46.065082 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
Feb 02 13:09:46 crc kubenswrapper[4721]: I0202 13:09:46.065377 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-oauth-serving-cert\") pod \"console-858d4f646b-v8xpv\" (UID: \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\") " pod="openshift-console/console-858d4f646b-v8xpv"
Feb 02 13:09:46 crc kubenswrapper[4721]: I0202 13:09:46.065473 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-console-config\") pod \"console-858d4f646b-v8xpv\" (UID: \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\") " pod="openshift-console/console-858d4f646b-v8xpv"
Feb 02 13:09:46 crc kubenswrapper[4721]: I0202 13:09:46.070686 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-console-oauth-config\") pod \"console-858d4f646b-v8xpv\" (UID: \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\") " pod="openshift-console/console-858d4f646b-v8xpv"
Feb 02 13:09:46 crc kubenswrapper[4721]: I0202 13:09:46.070834 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-console-serving-cert\") pod \"console-858d4f646b-v8xpv\" (UID: \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\") " pod="openshift-console/console-858d4f646b-v8xpv"
Feb 02 13:09:46 crc kubenswrapper[4721]: I0202 13:09:46.079734 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-658px\" (UniqueName: \"kubernetes.io/projected/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-kube-api-access-658px\") pod \"console-858d4f646b-v8xpv\" (UID: \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\") " pod="openshift-console/console-858d4f646b-v8xpv"
Feb 02 13:09:46 crc kubenswrapper[4721]: I0202 13:09:46.215117 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-858d4f646b-v8xpv"
Feb 02 13:09:46 crc kubenswrapper[4721]: I0202 13:09:46.422293 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-858d4f646b-v8xpv"]
Feb 02 13:09:46 crc kubenswrapper[4721]: W0202 13:09:46.425968 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cb1d5e0_e67a_459b_ad6a_794d2f8bab70.slice/crio-550ae8a7d8f413e415a65a6ec4a23601971fd9d7d7d542219e95480b30d156d6 WatchSource:0}: Error finding container 550ae8a7d8f413e415a65a6ec4a23601971fd9d7d7d542219e95480b30d156d6: Status 404 returned error can't find the container with id 550ae8a7d8f413e415a65a6ec4a23601971fd9d7d7d542219e95480b30d156d6
Feb 02 13:09:46 crc kubenswrapper[4721]: I0202 13:09:46.762165 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-858d4f646b-v8xpv" event={"ID":"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70","Type":"ContainerStarted","Data":"dd8792e47fe91e154df2c809101cbf81ddf00e86f1e989f7de7b429baf8fae66"}
Feb 02 13:09:46 crc kubenswrapper[4721]: I0202 13:09:46.762501 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-858d4f646b-v8xpv" event={"ID":"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70","Type":"ContainerStarted","Data":"550ae8a7d8f413e415a65a6ec4a23601971fd9d7d7d542219e95480b30d156d6"}
Feb 02 13:09:46 crc kubenswrapper[4721]: I0202 13:09:46.783156 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-858d4f646b-v8xpv" podStartSLOduration=1.7827234 podStartE2EDuration="1.7827234s" podCreationTimestamp="2026-02-02 13:09:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:09:46.78049025 +0000 UTC m=+527.083004649" watchObservedRunningTime="2026-02-02 13:09:46.7827234 +0000 UTC m=+527.085237809"
Feb 02 13:09:56 crc kubenswrapper[4721]: I0202 13:09:56.216300 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-858d4f646b-v8xpv"
Feb 02 13:09:56 crc kubenswrapper[4721]: I0202 13:09:56.217282 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-858d4f646b-v8xpv"
Feb 02 13:09:56 crc kubenswrapper[4721]: I0202 13:09:56.222126 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-858d4f646b-v8xpv"
Feb 02 13:09:56 crc kubenswrapper[4721]: I0202 13:09:56.823322 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-858d4f646b-v8xpv"
Feb 02 13:09:56 crc kubenswrapper[4721]: I0202 13:09:56.883212 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-56c598d788-j9gjl"]
Feb 02 13:10:21 crc kubenswrapper[4721]: I0202 13:10:21.924281 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-56c598d788-j9gjl" podUID="848aa930-7630-4eed-b114-23853a30daac" containerName="console" containerID="cri-o://98c0eac20465229d8bff6225571570b540d801c33450e2e81bc8f079fbc7cf32" gracePeriod=15
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.330736 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-56c598d788-j9gjl_848aa930-7630-4eed-b114-23853a30daac/console/0.log"
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.331121 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-56c598d788-j9gjl"
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.398087 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/848aa930-7630-4eed-b114-23853a30daac-trusted-ca-bundle\") pod \"848aa930-7630-4eed-b114-23853a30daac\" (UID: \"848aa930-7630-4eed-b114-23853a30daac\") "
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.398139 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/848aa930-7630-4eed-b114-23853a30daac-oauth-serving-cert\") pod \"848aa930-7630-4eed-b114-23853a30daac\" (UID: \"848aa930-7630-4eed-b114-23853a30daac\") "
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.399134 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/848aa930-7630-4eed-b114-23853a30daac-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "848aa930-7630-4eed-b114-23853a30daac" (UID: "848aa930-7630-4eed-b114-23853a30daac"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.399144 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/848aa930-7630-4eed-b114-23853a30daac-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "848aa930-7630-4eed-b114-23853a30daac" (UID: "848aa930-7630-4eed-b114-23853a30daac"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.399148 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st584\" (UniqueName: \"kubernetes.io/projected/848aa930-7630-4eed-b114-23853a30daac-kube-api-access-st584\") pod \"848aa930-7630-4eed-b114-23853a30daac\" (UID: \"848aa930-7630-4eed-b114-23853a30daac\") "
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.399256 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/848aa930-7630-4eed-b114-23853a30daac-service-ca\") pod \"848aa930-7630-4eed-b114-23853a30daac\" (UID: \"848aa930-7630-4eed-b114-23853a30daac\") "
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.399289 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/848aa930-7630-4eed-b114-23853a30daac-console-oauth-config\") pod \"848aa930-7630-4eed-b114-23853a30daac\" (UID: \"848aa930-7630-4eed-b114-23853a30daac\") "
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.399316 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/848aa930-7630-4eed-b114-23853a30daac-console-serving-cert\") pod \"848aa930-7630-4eed-b114-23853a30daac\" (UID: \"848aa930-7630-4eed-b114-23853a30daac\") "
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.399344 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/848aa930-7630-4eed-b114-23853a30daac-console-config\") pod \"848aa930-7630-4eed-b114-23853a30daac\" (UID: \"848aa930-7630-4eed-b114-23853a30daac\") "
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.399831 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/848aa930-7630-4eed-b114-23853a30daac-service-ca" (OuterVolumeSpecName: "service-ca") pod "848aa930-7630-4eed-b114-23853a30daac" (UID: "848aa930-7630-4eed-b114-23853a30daac"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.400103 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/848aa930-7630-4eed-b114-23853a30daac-console-config" (OuterVolumeSpecName: "console-config") pod "848aa930-7630-4eed-b114-23853a30daac" (UID: "848aa930-7630-4eed-b114-23853a30daac"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.400221 4721 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/848aa930-7630-4eed-b114-23853a30daac-service-ca\") on node \"crc\" DevicePath \"\""
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.400244 4721 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/848aa930-7630-4eed-b114-23853a30daac-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.400257 4721 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/848aa930-7630-4eed-b114-23853a30daac-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.404410 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/848aa930-7630-4eed-b114-23853a30daac-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "848aa930-7630-4eed-b114-23853a30daac" (UID: "848aa930-7630-4eed-b114-23853a30daac"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.404615 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/848aa930-7630-4eed-b114-23853a30daac-kube-api-access-st584" (OuterVolumeSpecName: "kube-api-access-st584") pod "848aa930-7630-4eed-b114-23853a30daac" (UID: "848aa930-7630-4eed-b114-23853a30daac"). InnerVolumeSpecName "kube-api-access-st584". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.404675 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/848aa930-7630-4eed-b114-23853a30daac-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "848aa930-7630-4eed-b114-23853a30daac" (UID: "848aa930-7630-4eed-b114-23853a30daac"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.501628 4721 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/848aa930-7630-4eed-b114-23853a30daac-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.501669 4721 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/848aa930-7630-4eed-b114-23853a30daac-console-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.501679 4721 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/848aa930-7630-4eed-b114-23853a30daac-console-config\") on node \"crc\" DevicePath \"\""
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.501691 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st584\" (UniqueName: \"kubernetes.io/projected/848aa930-7630-4eed-b114-23853a30daac-kube-api-access-st584\") on node \"crc\" DevicePath \"\""
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.967416 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-56c598d788-j9gjl_848aa930-7630-4eed-b114-23853a30daac/console/0.log"
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.967712 4721 generic.go:334] "Generic (PLEG): container finished" podID="848aa930-7630-4eed-b114-23853a30daac" containerID="98c0eac20465229d8bff6225571570b540d801c33450e2e81bc8f079fbc7cf32" exitCode=2
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.967747 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-56c598d788-j9gjl" event={"ID":"848aa930-7630-4eed-b114-23853a30daac","Type":"ContainerDied","Data":"98c0eac20465229d8bff6225571570b540d801c33450e2e81bc8f079fbc7cf32"}
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.967781 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-56c598d788-j9gjl" event={"ID":"848aa930-7630-4eed-b114-23853a30daac","Type":"ContainerDied","Data":"fcb6049175d618686bd24850fd66d3afd1a92f853e4d4ca21439dea1da41c941"}
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.967807 4721 scope.go:117] "RemoveContainer" containerID="98c0eac20465229d8bff6225571570b540d801c33450e2e81bc8f079fbc7cf32"
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.967817 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-56c598d788-j9gjl"
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.986549 4721 scope.go:117] "RemoveContainer" containerID="98c0eac20465229d8bff6225571570b540d801c33450e2e81bc8f079fbc7cf32"
Feb 02 13:10:22 crc kubenswrapper[4721]: E0202 13:10:22.987044 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98c0eac20465229d8bff6225571570b540d801c33450e2e81bc8f079fbc7cf32\": container with ID starting with 98c0eac20465229d8bff6225571570b540d801c33450e2e81bc8f079fbc7cf32 not found: ID does not exist" containerID="98c0eac20465229d8bff6225571570b540d801c33450e2e81bc8f079fbc7cf32"
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.987105 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98c0eac20465229d8bff6225571570b540d801c33450e2e81bc8f079fbc7cf32"} err="failed to get container status \"98c0eac20465229d8bff6225571570b540d801c33450e2e81bc8f079fbc7cf32\": rpc error: code = NotFound desc = could not find container \"98c0eac20465229d8bff6225571570b540d801c33450e2e81bc8f079fbc7cf32\": container with ID starting with 98c0eac20465229d8bff6225571570b540d801c33450e2e81bc8f079fbc7cf32 not found: ID does not exist"
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.989954 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-56c598d788-j9gjl"]
Feb 02 13:10:22 crc kubenswrapper[4721]: I0202 13:10:22.992201 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-56c598d788-j9gjl"]
Feb 02 13:10:24 crc kubenswrapper[4721]: I0202 13:10:24.417173 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="848aa930-7630-4eed-b114-23853a30daac" path="/var/lib/kubelet/pods/848aa930-7630-4eed-b114-23853a30daac/volumes"
Feb 02 13:10:44 crc kubenswrapper[4721]: I0202 13:10:44.763931 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 13:10:44 crc kubenswrapper[4721]: I0202 13:10:44.764404 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 13:11:14 crc kubenswrapper[4721]: I0202 13:11:14.764319 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 13:11:14 crc kubenswrapper[4721]: I0202 13:11:14.765038 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 13:11:44 crc kubenswrapper[4721]: I0202 13:11:44.764058 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 13:11:44 crc kubenswrapper[4721]: I0202 13:11:44.764701 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 13:11:44 crc kubenswrapper[4721]: I0202 13:11:44.764760 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rppjz"
Feb 02 13:11:44 crc kubenswrapper[4721]: I0202 13:11:44.765503 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"014ac4f70cadb2e5ed3977a4b883172b9e9190b5cdf25295500702abdd38ede7"} pod="openshift-machine-config-operator/machine-config-daemon-rppjz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 13:11:44 crc kubenswrapper[4721]: I0202 13:11:44.765563 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" containerID="cri-o://014ac4f70cadb2e5ed3977a4b883172b9e9190b5cdf25295500702abdd38ede7" gracePeriod=600
Feb 02 13:11:45 crc kubenswrapper[4721]: I0202 13:11:45.477588 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerDied","Data":"014ac4f70cadb2e5ed3977a4b883172b9e9190b5cdf25295500702abdd38ede7"}
Feb 02 13:11:45 crc kubenswrapper[4721]: I0202 13:11:45.478200 4721 scope.go:117] "RemoveContainer" containerID="b87a5ff0b46a353772eefb47418659c5151aa34831e5ecd0ccf0f601bac1ccfe"
Feb 02 13:11:45 crc kubenswrapper[4721]: I0202 13:11:45.477540 4721 generic.go:334] "Generic (PLEG): container finished" podID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerID="014ac4f70cadb2e5ed3977a4b883172b9e9190b5cdf25295500702abdd38ede7" exitCode=0
Feb 02 13:11:45 crc kubenswrapper[4721]: I0202 13:11:45.478333 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerStarted","Data":"4e271bf7e19d8205d47335a427c173d1e8d60e0f2a6167b224679306973cc1cc"}
Feb 02 13:12:08 crc kubenswrapper[4721]: I0202 13:12:08.859794 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj"]
Feb 02 13:12:08 crc kubenswrapper[4721]: E0202 13:12:08.860412 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="848aa930-7630-4eed-b114-23853a30daac" containerName="console"
Feb 02 13:12:08 crc kubenswrapper[4721]: I0202 13:12:08.860423 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="848aa930-7630-4eed-b114-23853a30daac" containerName="console"
Feb 02 13:12:08 crc kubenswrapper[4721]: I0202 13:12:08.860525 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="848aa930-7630-4eed-b114-23853a30daac" containerName="console"
Feb 02 13:12:08 crc kubenswrapper[4721]: I0202 13:12:08.861322 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj"
Feb 02 13:12:08 crc kubenswrapper[4721]: I0202 13:12:08.862849 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 02 13:12:08 crc kubenswrapper[4721]: I0202 13:12:08.872361 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj"]
Feb 02 13:12:09 crc kubenswrapper[4721]: I0202 13:12:09.016981 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6b6f1f89-2c62-4c26-abd3-2d105289fc8c-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj\" (UID: \"6b6f1f89-2c62-4c26-abd3-2d105289fc8c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj"
Feb 02 13:12:09 crc kubenswrapper[4721]: I0202 13:12:09.017035 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdgvg\" (UniqueName: \"kubernetes.io/projected/6b6f1f89-2c62-4c26-abd3-2d105289fc8c-kube-api-access-cdgvg\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj\" (UID: \"6b6f1f89-2c62-4c26-abd3-2d105289fc8c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj"
Feb 02 13:12:09 crc kubenswrapper[4721]: I0202 13:12:09.017153 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6b6f1f89-2c62-4c26-abd3-2d105289fc8c-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj\" (UID: \"6b6f1f89-2c62-4c26-abd3-2d105289fc8c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj"
Feb 02 13:12:09 crc kubenswrapper[4721]: I0202 13:12:09.118386 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6b6f1f89-2c62-4c26-abd3-2d105289fc8c-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj\" (UID: \"6b6f1f89-2c62-4c26-abd3-2d105289fc8c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj"
Feb 02 13:12:09 crc kubenswrapper[4721]: I0202 13:12:09.118538 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6b6f1f89-2c62-4c26-abd3-2d105289fc8c-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj\" (UID: \"6b6f1f89-2c62-4c26-abd3-2d105289fc8c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj"
Feb 02 13:12:09 crc kubenswrapper[4721]: I0202 13:12:09.118596 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdgvg\" (UniqueName: \"kubernetes.io/projected/6b6f1f89-2c62-4c26-abd3-2d105289fc8c-kube-api-access-cdgvg\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj\" (UID: \"6b6f1f89-2c62-4c26-abd3-2d105289fc8c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj"
Feb 02 13:12:09 crc kubenswrapper[4721]: I0202 13:12:09.118990 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6b6f1f89-2c62-4c26-abd3-2d105289fc8c-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj\" (UID: \"6b6f1f89-2c62-4c26-abd3-2d105289fc8c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj"
Feb 02 13:12:09 crc kubenswrapper[4721]: I0202 13:12:09.118990 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6b6f1f89-2c62-4c26-abd3-2d105289fc8c-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj\" (UID: \"6b6f1f89-2c62-4c26-abd3-2d105289fc8c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj"
Feb 02 13:12:09 crc kubenswrapper[4721]: I0202 13:12:09.141360 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdgvg\" (UniqueName: \"kubernetes.io/projected/6b6f1f89-2c62-4c26-abd3-2d105289fc8c-kube-api-access-cdgvg\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj\" (UID: \"6b6f1f89-2c62-4c26-abd3-2d105289fc8c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj"
Feb 02 13:12:09 crc kubenswrapper[4721]: I0202 13:12:09.181956 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj"
Feb 02 13:12:09 crc kubenswrapper[4721]: I0202 13:12:09.588355 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj"]
Feb 02 13:12:09 crc kubenswrapper[4721]: I0202 13:12:09.647712 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj" event={"ID":"6b6f1f89-2c62-4c26-abd3-2d105289fc8c","Type":"ContainerStarted","Data":"f4cef8bac972b217cde6a5c6238cad0bafecad68a72372f37b1a4bb8d224d7aa"}
Feb 02 13:12:10 crc kubenswrapper[4721]: I0202 13:12:10.654829 4721 generic.go:334] "Generic (PLEG): container finished" podID="6b6f1f89-2c62-4c26-abd3-2d105289fc8c" containerID="b55b1e33f819c0c2f22ef5379434b120bd261d0b70b6b470a047653e44c7773e" exitCode=0
Feb 02 13:12:10 crc kubenswrapper[4721]: I0202 13:12:10.654947 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj" event={"ID":"6b6f1f89-2c62-4c26-abd3-2d105289fc8c","Type":"ContainerDied","Data":"b55b1e33f819c0c2f22ef5379434b120bd261d0b70b6b470a047653e44c7773e"}
Feb 02 13:12:10 crc kubenswrapper[4721]: I0202 13:12:10.658144 4721 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 02 13:12:12 crc kubenswrapper[4721]: I0202 13:12:12.668761 4721 generic.go:334] "Generic (PLEG): container finished" podID="6b6f1f89-2c62-4c26-abd3-2d105289fc8c" containerID="fa5963647cb5da5e7b22f8c4fc93bba86a1ea910c234da502ff074c7aae60cc6" exitCode=0
Feb 02 13:12:12 crc kubenswrapper[4721]: I0202 13:12:12.668803 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj" event={"ID":"6b6f1f89-2c62-4c26-abd3-2d105289fc8c","Type":"ContainerDied","Data":"fa5963647cb5da5e7b22f8c4fc93bba86a1ea910c234da502ff074c7aae60cc6"}
Feb 02 13:12:13 crc kubenswrapper[4721]: I0202 13:12:13.676674 4721 generic.go:334] "Generic (PLEG): container finished" podID="6b6f1f89-2c62-4c26-abd3-2d105289fc8c" containerID="14a0661444512e8d743ac85d013769c639534bc5f2b5c95c34e6575b49d7b455" exitCode=0
Feb 02 13:12:13 crc kubenswrapper[4721]: I0202 13:12:13.676738 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj" event={"ID":"6b6f1f89-2c62-4c26-abd3-2d105289fc8c","Type":"ContainerDied","Data":"14a0661444512e8d743ac85d013769c639534bc5f2b5c95c34e6575b49d7b455"}
Feb 02 13:12:14 crc kubenswrapper[4721]: I0202 13:12:14.924655 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj"
Feb 02 13:12:15 crc kubenswrapper[4721]: I0202 13:12:15.017955 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6b6f1f89-2c62-4c26-abd3-2d105289fc8c-bundle\") pod \"6b6f1f89-2c62-4c26-abd3-2d105289fc8c\" (UID: \"6b6f1f89-2c62-4c26-abd3-2d105289fc8c\") "
Feb 02 13:12:15 crc kubenswrapper[4721]: I0202 13:12:15.018057 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdgvg\" (UniqueName: \"kubernetes.io/projected/6b6f1f89-2c62-4c26-abd3-2d105289fc8c-kube-api-access-cdgvg\") pod \"6b6f1f89-2c62-4c26-abd3-2d105289fc8c\" (UID: \"6b6f1f89-2c62-4c26-abd3-2d105289fc8c\") "
Feb 02 13:12:15 crc kubenswrapper[4721]: I0202 13:12:15.018215 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6b6f1f89-2c62-4c26-abd3-2d105289fc8c-util\") pod \"6b6f1f89-2c62-4c26-abd3-2d105289fc8c\" (UID: \"6b6f1f89-2c62-4c26-abd3-2d105289fc8c\") "
Feb 02 13:12:15 crc kubenswrapper[4721]: I0202 13:12:15.019910 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b6f1f89-2c62-4c26-abd3-2d105289fc8c-bundle" (OuterVolumeSpecName: "bundle") pod "6b6f1f89-2c62-4c26-abd3-2d105289fc8c" (UID: "6b6f1f89-2c62-4c26-abd3-2d105289fc8c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:12:15 crc kubenswrapper[4721]: I0202 13:12:15.026753 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b6f1f89-2c62-4c26-abd3-2d105289fc8c-kube-api-access-cdgvg" (OuterVolumeSpecName: "kube-api-access-cdgvg") pod "6b6f1f89-2c62-4c26-abd3-2d105289fc8c" (UID: "6b6f1f89-2c62-4c26-abd3-2d105289fc8c"). InnerVolumeSpecName "kube-api-access-cdgvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:12:15 crc kubenswrapper[4721]: I0202 13:12:15.119285 4721 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6b6f1f89-2c62-4c26-abd3-2d105289fc8c-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 13:12:15 crc kubenswrapper[4721]: I0202 13:12:15.119334 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdgvg\" (UniqueName: \"kubernetes.io/projected/6b6f1f89-2c62-4c26-abd3-2d105289fc8c-kube-api-access-cdgvg\") on node \"crc\" DevicePath \"\""
Feb 02 13:12:15 crc kubenswrapper[4721]: I0202 13:12:15.324711 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b6f1f89-2c62-4c26-abd3-2d105289fc8c-util" (OuterVolumeSpecName: "util") pod "6b6f1f89-2c62-4c26-abd3-2d105289fc8c" (UID: "6b6f1f89-2c62-4c26-abd3-2d105289fc8c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:12:15 crc kubenswrapper[4721]: I0202 13:12:15.425331 4721 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6b6f1f89-2c62-4c26-abd3-2d105289fc8c-util\") on node \"crc\" DevicePath \"\""
Feb 02 13:12:15 crc kubenswrapper[4721]: I0202 13:12:15.694498 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj" event={"ID":"6b6f1f89-2c62-4c26-abd3-2d105289fc8c","Type":"ContainerDied","Data":"f4cef8bac972b217cde6a5c6238cad0bafecad68a72372f37b1a4bb8d224d7aa"}
Feb 02 13:12:15 crc kubenswrapper[4721]: I0202 13:12:15.694542 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4cef8bac972b217cde6a5c6238cad0bafecad68a72372f37b1a4bb8d224d7aa"
Feb 02 13:12:15 crc kubenswrapper[4721]: I0202 13:12:15.694557 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj"
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.054451 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pwcs2"]
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.055473 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="ovn-controller" containerID="cri-o://30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8" gracePeriod=30
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.055837 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="sbdb" containerID="cri-o://27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a" gracePeriod=30
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.055880 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="nbdb" containerID="cri-o://3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5" gracePeriod=30
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.055912 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="northd" containerID="cri-o://677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34" gracePeriod=30
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.055940 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3" gracePeriod=30
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.055994 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="kube-rbac-proxy-node" containerID="cri-o://dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6" gracePeriod=30
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.056027 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="ovn-acl-logging" containerID="cri-o://4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301" gracePeriod=30
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.098972 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="ovnkube-controller" containerID="cri-o://3d1bf13cf098a5769bf003590f59136941db8ae4e0650114deb6dfec3a6e6464" gracePeriod=30
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.739242 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwcs2_b15bc48d-f88d-4b38-a9e1-00bb00b88a52/ovnkube-controller/3.log"
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.742005 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwcs2_b15bc48d-f88d-4b38-a9e1-00bb00b88a52/ovn-acl-logging/0.log"
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.742653 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwcs2_b15bc48d-f88d-4b38-a9e1-00bb00b88a52/ovn-controller/0.log"
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.744303 4721 generic.go:334] "Generic (PLEG): container finished" podID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerID="3d1bf13cf098a5769bf003590f59136941db8ae4e0650114deb6dfec3a6e6464" exitCode=0
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.744366 4721 generic.go:334] "Generic (PLEG): container finished" podID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerID="27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a" exitCode=0
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.744377 4721 generic.go:334] "Generic (PLEG): container finished" podID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerID="3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5" exitCode=0
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.744386 4721 generic.go:334] "Generic (PLEG): container finished" podID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerID="677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34" exitCode=0
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.744396 4721 generic.go:334] "Generic (PLEG): container finished" podID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerID="4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301" exitCode=143
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.744407 4721 generic.go:334] "Generic (PLEG): container finished" podID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerID="30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8" exitCode=143
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.744385 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" event={"ID":"b15bc48d-f88d-4b38-a9e1-00bb00b88a52","Type":"ContainerDied","Data":"3d1bf13cf098a5769bf003590f59136941db8ae4e0650114deb6dfec3a6e6464"}
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.744511 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" event={"ID":"b15bc48d-f88d-4b38-a9e1-00bb00b88a52","Type":"ContainerDied","Data":"27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a"}
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.744532 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" event={"ID":"b15bc48d-f88d-4b38-a9e1-00bb00b88a52","Type":"ContainerDied","Data":"3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5"}
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.744550 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" event={"ID":"b15bc48d-f88d-4b38-a9e1-00bb00b88a52","Type":"ContainerDied","Data":"677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34"}
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.744564 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" event={"ID":"b15bc48d-f88d-4b38-a9e1-00bb00b88a52","Type":"ContainerDied","Data":"4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301"}
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.744575 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" event={"ID":"b15bc48d-f88d-4b38-a9e1-00bb00b88a52","Type":"ContainerDied","Data":"30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8"}
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.744596 4721 scope.go:117] "RemoveContainer" containerID="0ef87877d327f3682656924644afe911470b558fe3fd45dd708e4d6f0aa69f29"
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.747889 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ltw7d_5ba84858-caaa-4fba-8eaf-9f7ddece0b3a/kube-multus/2.log"
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.749385 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ltw7d_5ba84858-caaa-4fba-8eaf-9f7ddece0b3a/kube-multus/1.log"
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.749426 4721 generic.go:334] "Generic (PLEG): container finished" podID="5ba84858-caaa-4fba-8eaf-9f7ddece0b3a" containerID="c98892a7ff179bcba871f45746b3c85a83090c01da93d13aeaeb2a282472689d" exitCode=2
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.749452 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ltw7d" event={"ID":"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a","Type":"ContainerDied","Data":"c98892a7ff179bcba871f45746b3c85a83090c01da93d13aeaeb2a282472689d"}
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.749928 4721 scope.go:117] "RemoveContainer" containerID="c98892a7ff179bcba871f45746b3c85a83090c01da93d13aeaeb2a282472689d"
Feb 02 13:12:20 crc kubenswrapper[4721]: E0202 13:12:20.750121 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-ltw7d_openshift-multus(5ba84858-caaa-4fba-8eaf-9f7ddece0b3a)\"" pod="openshift-multus/multus-ltw7d" podUID="5ba84858-caaa-4fba-8eaf-9f7ddece0b3a"
Feb 02 13:12:20 crc kubenswrapper[4721]: I0202 13:12:20.770512 4721 scope.go:117] "RemoveContainer" containerID="3e01f486c1be69ceb8e869e6feefbef4e07fd230f5cb41ec3adfaaba36430569"
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.234873 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwcs2_b15bc48d-f88d-4b38-a9e1-00bb00b88a52/ovn-acl-logging/0.log"
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.235333 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwcs2_b15bc48d-f88d-4b38-a9e1-00bb00b88a52/ovn-controller/0.log"
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.235643 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2"
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.328562 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-run-netns\") pod \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") "
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.328616 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-var-lib-openvswitch\") pod \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") "
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.328656 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-var-lib-cni-networks-ovn-kubernetes\") pod \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") "
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.328696 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88f6w\" (UniqueName: \"kubernetes.io/projected/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-kube-api-access-88f6w\") pod \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") "
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.328721 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-log-socket\") pod \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") "
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.328746 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-ovn-node-metrics-cert\") pod \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") "
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.328765 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-slash\") pod \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") "
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.328791 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-run-openvswitch\") pod \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") "
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.328768 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "b15bc48d-f88d-4b38-a9e1-00bb00b88a52" (UID: "b15bc48d-f88d-4b38-a9e1-00bb00b88a52"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.328830 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-env-overrides\") pod \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") "
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.328869 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-run-ovn\") pod \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") "
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.328872 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-log-socket" (OuterVolumeSpecName: "log-socket") pod "b15bc48d-f88d-4b38-a9e1-00bb00b88a52" (UID: "b15bc48d-f88d-4b38-a9e1-00bb00b88a52"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.328897 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-kubelet\") pod \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") "
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.328909 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "b15bc48d-f88d-4b38-a9e1-00bb00b88a52" (UID: "b15bc48d-f88d-4b38-a9e1-00bb00b88a52"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.328937 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-ovnkube-script-lib\") pod \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") "
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.328944 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "b15bc48d-f88d-4b38-a9e1-00bb00b88a52" (UID: "b15bc48d-f88d-4b38-a9e1-00bb00b88a52"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.328976 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-cni-bin\") pod \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") "
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.328997 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-run-ovn-kubernetes\") pod \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") "
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.329039 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-systemd-units\") pod \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") "
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.329058 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-run-systemd\") pod \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") "
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.329107 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-ovnkube-config\") pod \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") "
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.329128 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-node-log\") pod \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") "
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.329156 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-etc-openvswitch\") pod \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") "
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.329193 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-cni-netd\") pod \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\" (UID: \"b15bc48d-f88d-4b38-a9e1-00bb00b88a52\") "
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.329489 4721 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-run-netns\") on node \"crc\" DevicePath \"\""
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.329504 4721 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.329515 4721 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.329530 4721 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-log-socket\") on node \"crc\" DevicePath \"\""
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.329603 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "b15bc48d-f88d-4b38-a9e1-00bb00b88a52" (UID: "b15bc48d-f88d-4b38-a9e1-00bb00b88a52"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.330193 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "b15bc48d-f88d-4b38-a9e1-00bb00b88a52" (UID: "b15bc48d-f88d-4b38-a9e1-00bb00b88a52"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.330265 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-slash" (OuterVolumeSpecName: "host-slash") pod "b15bc48d-f88d-4b38-a9e1-00bb00b88a52" (UID: "b15bc48d-f88d-4b38-a9e1-00bb00b88a52"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.330288 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "b15bc48d-f88d-4b38-a9e1-00bb00b88a52" (UID: "b15bc48d-f88d-4b38-a9e1-00bb00b88a52"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.330330 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "b15bc48d-f88d-4b38-a9e1-00bb00b88a52" (UID: "b15bc48d-f88d-4b38-a9e1-00bb00b88a52"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.330364 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "b15bc48d-f88d-4b38-a9e1-00bb00b88a52" (UID: "b15bc48d-f88d-4b38-a9e1-00bb00b88a52"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.330575 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-node-log" (OuterVolumeSpecName: "node-log") pod "b15bc48d-f88d-4b38-a9e1-00bb00b88a52" (UID: "b15bc48d-f88d-4b38-a9e1-00bb00b88a52"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.330657 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "b15bc48d-f88d-4b38-a9e1-00bb00b88a52" (UID: "b15bc48d-f88d-4b38-a9e1-00bb00b88a52"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.330982 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "b15bc48d-f88d-4b38-a9e1-00bb00b88a52" (UID: "b15bc48d-f88d-4b38-a9e1-00bb00b88a52"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.331008 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "b15bc48d-f88d-4b38-a9e1-00bb00b88a52" (UID: "b15bc48d-f88d-4b38-a9e1-00bb00b88a52"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.331021 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "b15bc48d-f88d-4b38-a9e1-00bb00b88a52" (UID: "b15bc48d-f88d-4b38-a9e1-00bb00b88a52"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.331045 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "b15bc48d-f88d-4b38-a9e1-00bb00b88a52" (UID: "b15bc48d-f88d-4b38-a9e1-00bb00b88a52"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.331054 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "b15bc48d-f88d-4b38-a9e1-00bb00b88a52" (UID: "b15bc48d-f88d-4b38-a9e1-00bb00b88a52"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.341747 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-kube-api-access-88f6w" (OuterVolumeSpecName: "kube-api-access-88f6w") pod "b15bc48d-f88d-4b38-a9e1-00bb00b88a52" (UID: "b15bc48d-f88d-4b38-a9e1-00bb00b88a52"). InnerVolumeSpecName "kube-api-access-88f6w". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.373432 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "b15bc48d-f88d-4b38-a9e1-00bb00b88a52" (UID: "b15bc48d-f88d-4b38-a9e1-00bb00b88a52"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.394358 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8dtnt"] Feb 02 13:12:21 crc kubenswrapper[4721]: E0202 13:12:21.394670 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="northd" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.394688 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="northd" Feb 02 13:12:21 crc kubenswrapper[4721]: E0202 13:12:21.394700 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="ovn-acl-logging" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.394708 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="ovn-acl-logging" Feb 02 13:12:21 crc kubenswrapper[4721]: E0202 13:12:21.394722 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.394730 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 13:12:21 crc kubenswrapper[4721]: E0202 13:12:21.394741 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="ovnkube-controller" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.394750 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="ovnkube-controller" Feb 02 13:12:21 crc kubenswrapper[4721]: E0202 13:12:21.394762 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="ovn-controller" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.394770 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="ovn-controller" Feb 02 13:12:21 crc kubenswrapper[4721]: E0202 13:12:21.394782 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="kube-rbac-proxy-node" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.394789 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="kube-rbac-proxy-node" Feb 02 13:12:21 crc kubenswrapper[4721]: E0202 13:12:21.394803 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="kubecfg-setup" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.394811 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="kubecfg-setup" Feb 02 13:12:21 crc kubenswrapper[4721]: E0202 13:12:21.394822 4721 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b6f1f89-2c62-4c26-abd3-2d105289fc8c" containerName="extract" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.394831 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b6f1f89-2c62-4c26-abd3-2d105289fc8c" containerName="extract" Feb 02 13:12:21 crc kubenswrapper[4721]: E0202 13:12:21.394843 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="ovnkube-controller" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.394849 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="ovnkube-controller" Feb 02 13:12:21 crc kubenswrapper[4721]: E0202 13:12:21.394859 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b6f1f89-2c62-4c26-abd3-2d105289fc8c" containerName="pull" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.394868 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b6f1f89-2c62-4c26-abd3-2d105289fc8c" containerName="pull" Feb 02 13:12:21 crc kubenswrapper[4721]: E0202 13:12:21.394877 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="ovnkube-controller" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.394884 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="ovnkube-controller" Feb 02 13:12:21 crc kubenswrapper[4721]: E0202 13:12:21.394892 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="nbdb" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.394899 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="nbdb" Feb 02 13:12:21 crc kubenswrapper[4721]: E0202 13:12:21.394909 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b6f1f89-2c62-4c26-abd3-2d105289fc8c" containerName="util" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.394915 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b6f1f89-2c62-4c26-abd3-2d105289fc8c" containerName="util" Feb 02 13:12:21 crc kubenswrapper[4721]: E0202 13:12:21.394926 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="sbdb" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.394933 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="sbdb" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.395055 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="ovn-controller" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.395070 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.395094 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b6f1f89-2c62-4c26-abd3-2d105289fc8c" containerName="extract" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.395107 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="ovnkube-controller" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.395116 4721 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="ovn-acl-logging" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.395123 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="northd" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.395131 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="nbdb" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.395150 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="sbdb" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.395163 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="kube-rbac-proxy-node" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.395173 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="ovnkube-controller" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.395181 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="ovnkube-controller" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.395190 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="ovnkube-controller" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.395202 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="ovnkube-controller" Feb 02 13:12:21 crc kubenswrapper[4721]: E0202 13:12:21.395328 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="ovnkube-controller" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.395337 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="ovnkube-controller" Feb 02 13:12:21 crc kubenswrapper[4721]: E0202 13:12:21.395604 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="ovnkube-controller" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.395615 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerName="ovnkube-controller" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.396354 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "b15bc48d-f88d-4b38-a9e1-00bb00b88a52" (UID: "b15bc48d-f88d-4b38-a9e1-00bb00b88a52"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.397646 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.432246 4721 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.432792 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88f6w\" (UniqueName: \"kubernetes.io/projected/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-kube-api-access-88f6w\") on node \"crc\" DevicePath \"\"" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.432859 4721 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.432930 4721 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-slash\") on node \"crc\" DevicePath \"\"" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.432996 4721 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.433063 4721 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.433145 4721 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.433207 4721 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.433260 4721 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.433309 4721 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.433356 4721 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.433403 4721 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.433452 4721 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-run-systemd\") on node \"crc\" 
DevicePath \"\"" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.433506 4721 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.433553 4721 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-node-log\") on node \"crc\" DevicePath \"\"" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.433598 4721 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b15bc48d-f88d-4b38-a9e1-00bb00b88a52-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.534842 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vntkf\" (UniqueName: \"kubernetes.io/projected/1733f0ca-4783-4099-b66d-b497993def10-kube-api-access-vntkf\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.534899 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-host-cni-bin\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.534956 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-var-lib-openvswitch\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.534979 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-host-kubelet\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.535004 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-host-run-ovn-kubernetes\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.535025 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-host-slash\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.535047 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-run-systemd\") pod \"ovnkube-node-8dtnt\" 
(UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.535111 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-run-openvswitch\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.535156 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-host-cni-netd\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.535184 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1733f0ca-4783-4099-b66d-b497993def10-ovnkube-config\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.535211 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-host-run-netns\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.535238 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-etc-openvswitch\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.535257 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-log-socket\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.535285 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1733f0ca-4783-4099-b66d-b497993def10-ovnkube-script-lib\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.535306 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-systemd-units\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.535331 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-node-log\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.535367 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1733f0ca-4783-4099-b66d-b497993def10-env-overrides\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.535403 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.535428 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1733f0ca-4783-4099-b66d-b497993def10-ovn-node-metrics-cert\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.535452 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-run-ovn\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.636929 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-host-slash\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.637238 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-run-systemd\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.637353 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-run-openvswitch\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.637418 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-run-openvswitch\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.637128 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-host-slash\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.637282 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-run-systemd\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.637585 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-host-cni-netd\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.637592 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-host-cni-netd\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.637746 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1733f0ca-4783-4099-b66d-b497993def10-ovnkube-config\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.637854 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-host-run-netns\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.637935 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-etc-openvswitch\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.638018 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-log-socket\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.638080 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-etc-openvswitch\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.638057 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-log-socket\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.638005 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-host-run-netns\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.638267 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1733f0ca-4783-4099-b66d-b497993def10-ovnkube-script-lib\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.638399 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-systemd-units\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.638495 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-node-log\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.638558 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-node-log\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.638452 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-systemd-units\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.638659 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1733f0ca-4783-4099-b66d-b497993def10-ovnkube-config\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.638744 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1733f0ca-4783-4099-b66d-b497993def10-env-overrides\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.638855 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 
13:12:21.639152 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1733f0ca-4783-4099-b66d-b497993def10-ovn-node-metrics-cert\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.639264 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-run-ovn\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.639382 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vntkf\" (UniqueName: \"kubernetes.io/projected/1733f0ca-4783-4099-b66d-b497993def10-kube-api-access-vntkf\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.639479 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-host-cni-bin\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.639568 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-host-cni-bin\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.639322 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-run-ovn\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.638947 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.639297 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1733f0ca-4783-4099-b66d-b497993def10-env-overrides\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.639097 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1733f0ca-4783-4099-b66d-b497993def10-ovnkube-script-lib\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.639814 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-var-lib-openvswitch\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.639910 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-host-kubelet\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.639978 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-host-kubelet\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.639861 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-var-lib-openvswitch\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.640190 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-host-run-ovn-kubernetes\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.640298 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1733f0ca-4783-4099-b66d-b497993def10-host-run-ovn-kubernetes\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.642589 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1733f0ca-4783-4099-b66d-b497993def10-ovn-node-metrics-cert\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.659336 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vntkf\" (UniqueName: \"kubernetes.io/projected/1733f0ca-4783-4099-b66d-b497993def10-kube-api-access-vntkf\") pod \"ovnkube-node-8dtnt\" (UID: \"1733f0ca-4783-4099-b66d-b497993def10\") " pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.722012 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.755580 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" event={"ID":"1733f0ca-4783-4099-b66d-b497993def10","Type":"ContainerStarted","Data":"916fed5acbc7fa0d10fdd8ecb098433cb96ff7234cffebd73f037c512862f82a"} Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.760154 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwcs2_b15bc48d-f88d-4b38-a9e1-00bb00b88a52/ovn-acl-logging/0.log" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.760865 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwcs2_b15bc48d-f88d-4b38-a9e1-00bb00b88a52/ovn-controller/0.log" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.761449 4721 generic.go:334] "Generic (PLEG): container finished" podID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerID="51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3" exitCode=0 Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.761482 4721 generic.go:334] "Generic (PLEG): container finished" podID="b15bc48d-f88d-4b38-a9e1-00bb00b88a52" containerID="dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6" exitCode=0 Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.761499 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" event={"ID":"b15bc48d-f88d-4b38-a9e1-00bb00b88a52","Type":"ContainerDied","Data":"51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3"} Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.761544 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" event={"ID":"b15bc48d-f88d-4b38-a9e1-00bb00b88a52","Type":"ContainerDied","Data":"dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6"} Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.761557 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" event={"ID":"b15bc48d-f88d-4b38-a9e1-00bb00b88a52","Type":"ContainerDied","Data":"1aec1a12b2d4ba708a47d40a6a1a8e146d69e8b9d3bc97bc1257a9e8fc573862"} Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.761578 4721 scope.go:117] "RemoveContainer" containerID="3d1bf13cf098a5769bf003590f59136941db8ae4e0650114deb6dfec3a6e6464" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.761632 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pwcs2" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.763617 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ltw7d_5ba84858-caaa-4fba-8eaf-9f7ddece0b3a/kube-multus/2.log" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.779115 4721 scope.go:117] "RemoveContainer" containerID="27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.801731 4721 scope.go:117] "RemoveContainer" containerID="3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.819283 4721 scope.go:117] "RemoveContainer" containerID="677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.819902 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pwcs2"] Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.824302 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pwcs2"] Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.842737 4721 scope.go:117] "RemoveContainer" containerID="51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.856052 4721 scope.go:117] "RemoveContainer" containerID="dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.870209 4721 scope.go:117] "RemoveContainer" containerID="4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.885351 4721 scope.go:117] "RemoveContainer" containerID="30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.912947 4721 scope.go:117] "RemoveContainer" containerID="406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.937486 4721 scope.go:117] "RemoveContainer" containerID="3d1bf13cf098a5769bf003590f59136941db8ae4e0650114deb6dfec3a6e6464" Feb 02 13:12:21 crc kubenswrapper[4721]: E0202 13:12:21.938072 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d1bf13cf098a5769bf003590f59136941db8ae4e0650114deb6dfec3a6e6464\": container with ID starting with 3d1bf13cf098a5769bf003590f59136941db8ae4e0650114deb6dfec3a6e6464 not found: ID does not exist" containerID="3d1bf13cf098a5769bf003590f59136941db8ae4e0650114deb6dfec3a6e6464" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.938114 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d1bf13cf098a5769bf003590f59136941db8ae4e0650114deb6dfec3a6e6464"} err="failed to get container status \"3d1bf13cf098a5769bf003590f59136941db8ae4e0650114deb6dfec3a6e6464\": rpc error: code = NotFound desc = could not find container \"3d1bf13cf098a5769bf003590f59136941db8ae4e0650114deb6dfec3a6e6464\": container with ID starting with 3d1bf13cf098a5769bf003590f59136941db8ae4e0650114deb6dfec3a6e6464 not found: ID does not exist" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.938145 4721 scope.go:117] "RemoveContainer" containerID="27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a" Feb 02 13:12:21 crc kubenswrapper[4721]: E0202 13:12:21.940181 4721 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\": container with ID starting with 27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a not found: ID does not exist" containerID="27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.940214 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a"} err="failed to get container status \"27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\": rpc error: code = NotFound desc = could not find container \"27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\": container with ID starting with 27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a not found: ID does not exist" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.940246 4721 scope.go:117] "RemoveContainer" containerID="3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5" Feb 02 13:12:21 crc kubenswrapper[4721]: E0202 13:12:21.940630 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\": container with ID starting with 3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5 not found: ID does not exist" containerID="3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.940657 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5"} err="failed to get container status \"3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\": rpc error: code = NotFound desc = could not find container \"3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\": container with ID starting with 3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5 not found: ID does not exist" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.940684 4721 scope.go:117] "RemoveContainer" containerID="677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34" Feb 02 13:12:21 crc kubenswrapper[4721]: E0202 13:12:21.941144 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\": container with ID starting with 677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34 not found: ID does not exist" containerID="677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.941179 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34"} err="failed to get container status \"677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\": rpc error: code = NotFound desc = could not find container \"677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\": container with ID starting with 677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34 not found: ID does not exist" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.941202 4721 scope.go:117] "RemoveContainer" 
containerID="51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3" Feb 02 13:12:21 crc kubenswrapper[4721]: E0202 13:12:21.941491 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\": container with ID starting with 51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3 not found: ID does not exist" containerID="51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.941522 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3"} err="failed to get container status \"51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\": rpc error: code = NotFound desc = could not find container \"51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\": container with ID starting with 51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3 not found: ID does not exist" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.941544 4721 scope.go:117] "RemoveContainer" containerID="dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6" Feb 02 13:12:21 crc kubenswrapper[4721]: E0202 13:12:21.941779 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\": container with ID starting with dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6 not found: ID does not exist" containerID="dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.941805 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6"} err="failed to get container status \"dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\": rpc error: code = NotFound desc = could not find container \"dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\": container with ID starting with dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6 not found: ID does not exist" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.941824 4721 scope.go:117] "RemoveContainer" containerID="4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301" Feb 02 13:12:21 crc kubenswrapper[4721]: E0202 13:12:21.942058 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\": container with ID starting with 4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301 not found: ID does not exist" containerID="4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301" Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.942096 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301"} err="failed to get container status \"4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\": rpc error: code = NotFound desc = could not find container \"4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\": container with ID starting with 
4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301 not found: ID does not exist"
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.942111 4721 scope.go:117] "RemoveContainer" containerID="30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8"
Feb 02 13:12:21 crc kubenswrapper[4721]: E0202 13:12:21.942311 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\": container with ID starting with 30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8 not found: ID does not exist" containerID="30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8"
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.942343 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8"} err="failed to get container status \"30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\": rpc error: code = NotFound desc = could not find container \"30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\": container with ID starting with 30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8 not found: ID does not exist"
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.942361 4721 scope.go:117] "RemoveContainer" containerID="406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b"
Feb 02 13:12:21 crc kubenswrapper[4721]: E0202 13:12:21.942612 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\": container with ID starting with 406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b not found: ID does not exist" containerID="406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b"
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.942637 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b"} err="failed to get container status \"406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\": rpc error: code = NotFound desc = could not find container \"406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\": container with ID starting with 406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b not found: ID does not exist"
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.942654 4721 scope.go:117] "RemoveContainer" containerID="3d1bf13cf098a5769bf003590f59136941db8ae4e0650114deb6dfec3a6e6464"
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.942826 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d1bf13cf098a5769bf003590f59136941db8ae4e0650114deb6dfec3a6e6464"} err="failed to get container status \"3d1bf13cf098a5769bf003590f59136941db8ae4e0650114deb6dfec3a6e6464\": rpc error: code = NotFound desc = could not find container \"3d1bf13cf098a5769bf003590f59136941db8ae4e0650114deb6dfec3a6e6464\": container with ID starting with 3d1bf13cf098a5769bf003590f59136941db8ae4e0650114deb6dfec3a6e6464 not found: ID does not exist"
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.942849 4721 scope.go:117] "RemoveContainer" containerID="27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a"
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.943025 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a"} err="failed to get container status \"27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\": rpc error: code = NotFound desc = could not find container \"27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a\": container with ID starting with 27e989df998ab4c64e1ddde284d04945c0183fac9a5f452ce859af9548ae292a not found: ID does not exist"
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.943045 4721 scope.go:117] "RemoveContainer" containerID="3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5"
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.943382 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5"} err="failed to get container status \"3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\": rpc error: code = NotFound desc = could not find container \"3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5\": container with ID starting with 3f9377b4f5108a5e0273594062e1c210fc2dbf2e2a27ca130fc2c669d91007b5 not found: ID does not exist"
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.943404 4721 scope.go:117] "RemoveContainer" containerID="677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34"
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.943743 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34"} err="failed to get container status \"677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\": rpc error: code = NotFound desc = could not find container \"677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34\": container with ID starting with 677e97a9c5184f19a782e3fb3881ac573f2cfe36d5965cfb7d85937a0a49ac34 not found: ID does not exist"
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.943772 4721 scope.go:117] "RemoveContainer" containerID="51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3"
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.944044 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3"} err="failed to get container status \"51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\": rpc error: code = NotFound desc = could not find container \"51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3\": container with ID starting with 51ef6bdd231cafcaecaee4d6c7a6ca1c4b54947522020d49e19421c63699eac3 not found: ID does not exist"
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.944094 4721 scope.go:117] "RemoveContainer" containerID="dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6"
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.944409 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6"} err="failed to get container status \"dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\": rpc error: code = NotFound desc = could not find container \"dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6\": container with ID starting with dc83b18fdeff15b470cdfd05ed235d5e1a7b10d3e8a59e8d9ece43c0a4f3f6e6 not found: ID does not exist"
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.944428 4721 scope.go:117] "RemoveContainer" containerID="4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301"
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.944631 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301"} err="failed to get container status \"4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\": rpc error: code = NotFound desc = could not find container \"4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301\": container with ID starting with 4d5822b1d1a041866f584d6ec48869a2aed81ed3bcc7cd6b274875c81b71f301 not found: ID does not exist"
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.944657 4721 scope.go:117] "RemoveContainer" containerID="30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8"
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.944890 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8"} err="failed to get container status \"30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\": rpc error: code = NotFound desc = could not find container \"30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8\": container with ID starting with 30c12d847ad570843fcdf211e47b860d378600023820faa9095937f4ffe713e8 not found: ID does not exist"
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.944916 4721 scope.go:117] "RemoveContainer" containerID="406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b"
Feb 02 13:12:21 crc kubenswrapper[4721]: I0202 13:12:21.945153 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b"} err="failed to get container status \"406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\": rpc error: code = NotFound desc = could not find container \"406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b\": container with ID starting with 406d5bcd32d4f27950ac751d73dccb89a8807311e8ec53595606b4c10a6dc67b not found: ID does not exist"
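[Editor's note] The RemoveContainer / "NotFound" pairs above are the kubelet re-deleting containers whose records are already gone from CRI-O; the runtime's NotFound answer is logged but treated as a completed deletion, which is why these entries are informational rather than fatal. A minimal Go sketch of that idempotent-delete pattern follows; `removeContainer`, `errNotFound`, and the in-memory store are illustrative assumptions, not the kubelet's actual code:

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the gRPC NotFound status a CRI runtime
// returns when a container ID no longer exists.
var errNotFound = errors.New("rpc error: code = NotFound")

// removeContainer is a stub for a CRI RemoveContainer call against a
// toy in-memory store of live container IDs.
func removeContainer(id string, store map[string]bool) error {
	if !store[id] {
		return fmt.Errorf("could not find container %q: %w", id, errNotFound)
	}
	delete(store, id)
	return nil
}

// cleanup deletes every ID, treating NotFound as success: the goal
// ("container is gone") is already met, so the error is only logged.
func cleanup(ids []string, store map[string]bool) {
	for _, id := range ids {
		if err := removeContainer(id, store); err != nil {
			if errors.Is(err, errNotFound) {
				fmt.Printf("already removed: %s (%v)\n", id, err)
				continue // idempotent: not a failure
			}
			fmt.Printf("delete failed: %s: %v\n", id, err)
		}
	}
}

func main() {
	store := map[string]bool{"30c12d84": true}
	cleanup([]string{"30c12d84", "406d5bcd"}, store)
}
```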
event={"ID":"1733f0ca-4783-4099-b66d-b497993def10","Type":"ContainerStarted","Data":"6d3bdaa9a13a718f47d3ec08be258880a297485f5f530643cd81fb43bde5080c"} Feb 02 13:12:23 crc kubenswrapper[4721]: I0202 13:12:23.782639 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" event={"ID":"1733f0ca-4783-4099-b66d-b497993def10","Type":"ContainerStarted","Data":"ce0c4d525718d4161300ff87f422840d6d7466f6fd67edc8119125210e7679db"} Feb 02 13:12:23 crc kubenswrapper[4721]: I0202 13:12:23.782656 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" event={"ID":"1733f0ca-4783-4099-b66d-b497993def10","Type":"ContainerStarted","Data":"b05b359c6c24dec978ec5f554107a55e11aabf0c5b2ec336c75619e2d32e34f8"} Feb 02 13:12:23 crc kubenswrapper[4721]: I0202 13:12:23.782691 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" event={"ID":"1733f0ca-4783-4099-b66d-b497993def10","Type":"ContainerStarted","Data":"bddb1128d3b459560504046faf54d940a06531590b0482d4fa2c39d39ac21d68"} Feb 02 13:12:24 crc kubenswrapper[4721]: I0202 13:12:24.790556 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" event={"ID":"1733f0ca-4783-4099-b66d-b497993def10","Type":"ContainerStarted","Data":"4e9b834c790e6421a7228961ab52a38a1f1339bcd72b9be601c4f9752feff355"} Feb 02 13:12:24 crc kubenswrapper[4721]: I0202 13:12:24.790800 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" event={"ID":"1733f0ca-4783-4099-b66d-b497993def10","Type":"ContainerStarted","Data":"9d383725cf59ebeabd80fd7da95fa1c4fd990c1dd13d8362360f03963bc75b72"} Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.265822 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-jltbt"] Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.266909 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jltbt" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.268526 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-fm7xc" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.269024 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.269179 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.400822 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx"] Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.401480 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.403292 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.403771 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pcw2\" (UniqueName: \"kubernetes.io/projected/30a5c0d6-c773-4914-a3b1-1654a51817a9-kube-api-access-6pcw2\") pod \"obo-prometheus-operator-68bc856cb9-jltbt\" (UID: \"30a5c0d6-c773-4914-a3b1-1654a51817a9\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jltbt" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.404008 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-5h9g6" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.427775 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9"] Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.428829 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.505350 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/41c83c75-cfc8-4c33-97cf-484cc7dcd812-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9\" (UID: \"41c83c75-cfc8-4c33-97cf-484cc7dcd812\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.505401 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a1f92c36-0e50-485e-a728-7b42f1ab44c4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx\" (UID: \"a1f92c36-0e50-485e-a728-7b42f1ab44c4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.505440 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a1f92c36-0e50-485e-a728-7b42f1ab44c4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx\" (UID: \"a1f92c36-0e50-485e-a728-7b42f1ab44c4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.505672 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pcw2\" (UniqueName: \"kubernetes.io/projected/30a5c0d6-c773-4914-a3b1-1654a51817a9-kube-api-access-6pcw2\") pod \"obo-prometheus-operator-68bc856cb9-jltbt\" (UID: \"30a5c0d6-c773-4914-a3b1-1654a51817a9\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jltbt" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.505810 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/41c83c75-cfc8-4c33-97cf-484cc7dcd812-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9\" (UID: \"41c83c75-cfc8-4c33-97cf-484cc7dcd812\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.528906 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pcw2\" (UniqueName: \"kubernetes.io/projected/30a5c0d6-c773-4914-a3b1-1654a51817a9-kube-api-access-6pcw2\") pod \"obo-prometheus-operator-68bc856cb9-jltbt\" (UID: \"30a5c0d6-c773-4914-a3b1-1654a51817a9\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jltbt" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.583681 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jltbt" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.607280 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a1f92c36-0e50-485e-a728-7b42f1ab44c4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx\" (UID: \"a1f92c36-0e50-485e-a728-7b42f1ab44c4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.607349 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a1f92c36-0e50-485e-a728-7b42f1ab44c4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx\" (UID: \"a1f92c36-0e50-485e-a728-7b42f1ab44c4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.607419 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/41c83c75-cfc8-4c33-97cf-484cc7dcd812-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9\" (UID: \"41c83c75-cfc8-4c33-97cf-484cc7dcd812\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.607471 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/41c83c75-cfc8-4c33-97cf-484cc7dcd812-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9\" (UID: \"41c83c75-cfc8-4c33-97cf-484cc7dcd812\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.608734 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-b2dk9"] Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.611571 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/41c83c75-cfc8-4c33-97cf-484cc7dcd812-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9\" (UID: \"41c83c75-cfc8-4c33-97cf-484cc7dcd812\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.612010 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a1f92c36-0e50-485e-a728-7b42f1ab44c4-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx\" (UID: \"a1f92c36-0e50-485e-a728-7b42f1ab44c4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.613909 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a1f92c36-0e50-485e-a728-7b42f1ab44c4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx\" (UID: \"a1f92c36-0e50-485e-a728-7b42f1ab44c4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.615518 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/41c83c75-cfc8-4c33-97cf-484cc7dcd812-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9\" (UID: \"41c83c75-cfc8-4c33-97cf-484cc7dcd812\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.616823 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-b2dk9" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.619939 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-lfslk" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.620169 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 02 13:12:26 crc kubenswrapper[4721]: E0202 13:12:26.646595 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jltbt_openshift-operators_30a5c0d6-c773-4914-a3b1-1654a51817a9_0(79672fbdaf8c685ac07766c52286d5c81ef8d69be2e667ad19714265c129754c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 13:12:26 crc kubenswrapper[4721]: E0202 13:12:26.646662 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jltbt_openshift-operators_30a5c0d6-c773-4914-a3b1-1654a51817a9_0(79672fbdaf8c685ac07766c52286d5c81ef8d69be2e667ad19714265c129754c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jltbt" Feb 02 13:12:26 crc kubenswrapper[4721]: E0202 13:12:26.646688 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jltbt_openshift-operators_30a5c0d6-c773-4914-a3b1-1654a51817a9_0(79672fbdaf8c685ac07766c52286d5c81ef8d69be2e667ad19714265c129754c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jltbt" Feb 02 13:12:26 crc kubenswrapper[4721]: E0202 13:12:26.646727 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-jltbt_openshift-operators(30a5c0d6-c773-4914-a3b1-1654a51817a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-jltbt_openshift-operators(30a5c0d6-c773-4914-a3b1-1654a51817a9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jltbt_openshift-operators_30a5c0d6-c773-4914-a3b1-1654a51817a9_0(79672fbdaf8c685ac07766c52286d5c81ef8d69be2e667ad19714265c129754c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jltbt" podUID="30a5c0d6-c773-4914-a3b1-1654a51817a9" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.708312 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ac0e2d1-4762-4c40-84c9-db0bde4f956f-observability-operator-tls\") pod \"observability-operator-59bdc8b94-b2dk9\" (UID: \"7ac0e2d1-4762-4c40-84c9-db0bde4f956f\") " pod="openshift-operators/observability-operator-59bdc8b94-b2dk9" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.708656 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8mx5\" (UniqueName: \"kubernetes.io/projected/7ac0e2d1-4762-4c40-84c9-db0bde4f956f-kube-api-access-b8mx5\") pod \"observability-operator-59bdc8b94-b2dk9\" (UID: \"7ac0e2d1-4762-4c40-84c9-db0bde4f956f\") " pod="openshift-operators/observability-operator-59bdc8b94-b2dk9" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.717227 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx" Feb 02 13:12:26 crc kubenswrapper[4721]: E0202 13:12:26.739977 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx_openshift-operators_a1f92c36-0e50-485e-a728-7b42f1ab44c4_0(7b53255c9c2462398e11314e80045559915817c75fa7fef85957c68092021385): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 13:12:26 crc kubenswrapper[4721]: E0202 13:12:26.740060 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx_openshift-operators_a1f92c36-0e50-485e-a728-7b42f1ab44c4_0(7b53255c9c2462398e11314e80045559915817c75fa7fef85957c68092021385): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx" Feb 02 13:12:26 crc kubenswrapper[4721]: E0202 13:12:26.740107 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx_openshift-operators_a1f92c36-0e50-485e-a728-7b42f1ab44c4_0(7b53255c9c2462398e11314e80045559915817c75fa7fef85957c68092021385): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx" Feb 02 13:12:26 crc kubenswrapper[4721]: E0202 13:12:26.740162 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx_openshift-operators(a1f92c36-0e50-485e-a728-7b42f1ab44c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx_openshift-operators(a1f92c36-0e50-485e-a728-7b42f1ab44c4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx_openshift-operators_a1f92c36-0e50-485e-a728-7b42f1ab44c4_0(7b53255c9c2462398e11314e80045559915817c75fa7fef85957c68092021385): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx" podUID="a1f92c36-0e50-485e-a728-7b42f1ab44c4" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.743537 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9" Feb 02 13:12:26 crc kubenswrapper[4721]: E0202 13:12:26.766018 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9_openshift-operators_41c83c75-cfc8-4c33-97cf-484cc7dcd812_0(fcbfbf0412edb28b35426e81591ffd922262d9a3013d56745bc3e277e40a1672): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 13:12:26 crc kubenswrapper[4721]: E0202 13:12:26.766104 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9_openshift-operators_41c83c75-cfc8-4c33-97cf-484cc7dcd812_0(fcbfbf0412edb28b35426e81591ffd922262d9a3013d56745bc3e277e40a1672): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9" Feb 02 13:12:26 crc kubenswrapper[4721]: E0202 13:12:26.766131 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9_openshift-operators_41c83c75-cfc8-4c33-97cf-484cc7dcd812_0(fcbfbf0412edb28b35426e81591ffd922262d9a3013d56745bc3e277e40a1672): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9" Feb 02 13:12:26 crc kubenswrapper[4721]: E0202 13:12:26.766182 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9_openshift-operators(41c83c75-cfc8-4c33-97cf-484cc7dcd812)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9_openshift-operators(41c83c75-cfc8-4c33-97cf-484cc7dcd812)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9_openshift-operators_41c83c75-cfc8-4c33-97cf-484cc7dcd812_0(fcbfbf0412edb28b35426e81591ffd922262d9a3013d56745bc3e277e40a1672): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9" podUID="41c83c75-cfc8-4c33-97cf-484cc7dcd812" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.805015 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" event={"ID":"1733f0ca-4783-4099-b66d-b497993def10","Type":"ContainerStarted","Data":"07603a2f12ad4b3eec83c311a37dd5f84245f8bd2803504e03e20579f16dcce5"} Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.808940 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-5w6sx"] Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.809706 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-5w6sx" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.809962 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ac0e2d1-4762-4c40-84c9-db0bde4f956f-observability-operator-tls\") pod \"observability-operator-59bdc8b94-b2dk9\" (UID: \"7ac0e2d1-4762-4c40-84c9-db0bde4f956f\") " pod="openshift-operators/observability-operator-59bdc8b94-b2dk9" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.810126 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8mx5\" (UniqueName: \"kubernetes.io/projected/7ac0e2d1-4762-4c40-84c9-db0bde4f956f-kube-api-access-b8mx5\") pod \"observability-operator-59bdc8b94-b2dk9\" (UID: \"7ac0e2d1-4762-4c40-84c9-db0bde4f956f\") " pod="openshift-operators/observability-operator-59bdc8b94-b2dk9" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.814096 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ac0e2d1-4762-4c40-84c9-db0bde4f956f-observability-operator-tls\") pod \"observability-operator-59bdc8b94-b2dk9\" (UID: \"7ac0e2d1-4762-4c40-84c9-db0bde4f956f\") " pod="openshift-operators/observability-operator-59bdc8b94-b2dk9" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.814389 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-6h7r5" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.830743 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8mx5\" (UniqueName: \"kubernetes.io/projected/7ac0e2d1-4762-4c40-84c9-db0bde4f956f-kube-api-access-b8mx5\") pod \"observability-operator-59bdc8b94-b2dk9\" (UID: 
\"7ac0e2d1-4762-4c40-84c9-db0bde4f956f\") " pod="openshift-operators/observability-operator-59bdc8b94-b2dk9" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.911350 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a3affad2-ab35-4604-8239-56f69bf3727f-openshift-service-ca\") pod \"perses-operator-5bf474d74f-5w6sx\" (UID: \"a3affad2-ab35-4604-8239-56f69bf3727f\") " pod="openshift-operators/perses-operator-5bf474d74f-5w6sx" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.911396 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksr2c\" (UniqueName: \"kubernetes.io/projected/a3affad2-ab35-4604-8239-56f69bf3727f-kube-api-access-ksr2c\") pod \"perses-operator-5bf474d74f-5w6sx\" (UID: \"a3affad2-ab35-4604-8239-56f69bf3727f\") " pod="openshift-operators/perses-operator-5bf474d74f-5w6sx" Feb 02 13:12:26 crc kubenswrapper[4721]: I0202 13:12:26.980134 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-b2dk9" Feb 02 13:12:26 crc kubenswrapper[4721]: E0202 13:12:26.998930 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-b2dk9_openshift-operators_7ac0e2d1-4762-4c40-84c9-db0bde4f956f_0(3e6cd8595f687626e31bfe27e13ca08eb7499fc327ff2c360e56822c8bee104b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 13:12:26 crc kubenswrapper[4721]: E0202 13:12:26.999026 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-b2dk9_openshift-operators_7ac0e2d1-4762-4c40-84c9-db0bde4f956f_0(3e6cd8595f687626e31bfe27e13ca08eb7499fc327ff2c360e56822c8bee104b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-b2dk9" Feb 02 13:12:26 crc kubenswrapper[4721]: E0202 13:12:26.999056 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-b2dk9_openshift-operators_7ac0e2d1-4762-4c40-84c9-db0bde4f956f_0(3e6cd8595f687626e31bfe27e13ca08eb7499fc327ff2c360e56822c8bee104b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-b2dk9" Feb 02 13:12:26 crc kubenswrapper[4721]: E0202 13:12:26.999125 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-b2dk9_openshift-operators(7ac0e2d1-4762-4c40-84c9-db0bde4f956f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-b2dk9_openshift-operators(7ac0e2d1-4762-4c40-84c9-db0bde4f956f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-b2dk9_openshift-operators_7ac0e2d1-4762-4c40-84c9-db0bde4f956f_0(3e6cd8595f687626e31bfe27e13ca08eb7499fc327ff2c360e56822c8bee104b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-b2dk9" podUID="7ac0e2d1-4762-4c40-84c9-db0bde4f956f" Feb 02 13:12:27 crc kubenswrapper[4721]: I0202 13:12:27.012983 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksr2c\" (UniqueName: \"kubernetes.io/projected/a3affad2-ab35-4604-8239-56f69bf3727f-kube-api-access-ksr2c\") pod \"perses-operator-5bf474d74f-5w6sx\" (UID: \"a3affad2-ab35-4604-8239-56f69bf3727f\") " pod="openshift-operators/perses-operator-5bf474d74f-5w6sx" Feb 02 13:12:27 crc kubenswrapper[4721]: I0202 13:12:27.013138 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a3affad2-ab35-4604-8239-56f69bf3727f-openshift-service-ca\") pod \"perses-operator-5bf474d74f-5w6sx\" (UID: \"a3affad2-ab35-4604-8239-56f69bf3727f\") " pod="openshift-operators/perses-operator-5bf474d74f-5w6sx" Feb 02 13:12:27 crc kubenswrapper[4721]: I0202 13:12:27.014131 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a3affad2-ab35-4604-8239-56f69bf3727f-openshift-service-ca\") pod \"perses-operator-5bf474d74f-5w6sx\" (UID: \"a3affad2-ab35-4604-8239-56f69bf3727f\") " pod="openshift-operators/perses-operator-5bf474d74f-5w6sx" Feb 02 13:12:27 crc kubenswrapper[4721]: I0202 13:12:27.037423 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksr2c\" (UniqueName: \"kubernetes.io/projected/a3affad2-ab35-4604-8239-56f69bf3727f-kube-api-access-ksr2c\") pod \"perses-operator-5bf474d74f-5w6sx\" (UID: \"a3affad2-ab35-4604-8239-56f69bf3727f\") " pod="openshift-operators/perses-operator-5bf474d74f-5w6sx" Feb 02 13:12:27 crc kubenswrapper[4721]: I0202 13:12:27.175751 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-5w6sx" Feb 02 13:12:27 crc kubenswrapper[4721]: E0202 13:12:27.206561 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-5w6sx_openshift-operators_a3affad2-ab35-4604-8239-56f69bf3727f_0(559b440f5cba3e7d109837dbc854906999eca986a89a4fb86a01200233f035ec): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 13:12:27 crc kubenswrapper[4721]: E0202 13:12:27.206644 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-5w6sx_openshift-operators_a3affad2-ab35-4604-8239-56f69bf3727f_0(559b440f5cba3e7d109837dbc854906999eca986a89a4fb86a01200233f035ec): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-5w6sx" Feb 02 13:12:27 crc kubenswrapper[4721]: E0202 13:12:27.206664 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-5w6sx_openshift-operators_a3affad2-ab35-4604-8239-56f69bf3727f_0(559b440f5cba3e7d109837dbc854906999eca986a89a4fb86a01200233f035ec): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-5w6sx" Feb 02 13:12:27 crc kubenswrapper[4721]: E0202 13:12:27.206709 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-5w6sx_openshift-operators(a3affad2-ab35-4604-8239-56f69bf3727f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-5w6sx_openshift-operators(a3affad2-ab35-4604-8239-56f69bf3727f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-5w6sx_openshift-operators_a3affad2-ab35-4604-8239-56f69bf3727f_0(559b440f5cba3e7d109837dbc854906999eca986a89a4fb86a01200233f035ec): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-5w6sx" podUID="a3affad2-ab35-4604-8239-56f69bf3727f" Feb 02 13:12:28 crc kubenswrapper[4721]: I0202 13:12:28.823616 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" event={"ID":"1733f0ca-4783-4099-b66d-b497993def10","Type":"ContainerStarted","Data":"a0d41756427f2961f838ea426ab5e3b3ff7ceafb5de64c0df4fd9d48d8c17bc0"} Feb 02 13:12:28 crc kubenswrapper[4721]: I0202 13:12:28.824341 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:28 crc kubenswrapper[4721]: I0202 13:12:28.824378 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:28 crc kubenswrapper[4721]: I0202 13:12:28.824423 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:28 crc kubenswrapper[4721]: I0202 13:12:28.852293 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:28 crc kubenswrapper[4721]: I0202 13:12:28.856361 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:28 crc kubenswrapper[4721]: I0202 13:12:28.873760 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" podStartSLOduration=7.873744889 podStartE2EDuration="7.873744889s" podCreationTimestamp="2026-02-02 13:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:12:28.872339692 +0000 UTC m=+689.174854101" watchObservedRunningTime="2026-02-02 13:12:28.873744889 +0000 UTC m=+689.176259278" Feb 02 13:12:29 crc kubenswrapper[4721]: I0202 13:12:29.359302 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-b2dk9"] Feb 02 13:12:29 crc kubenswrapper[4721]: I0202 13:12:29.359414 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-b2dk9" Feb 02 13:12:29 crc kubenswrapper[4721]: I0202 13:12:29.359815 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-b2dk9" Feb 02 13:12:29 crc kubenswrapper[4721]: I0202 13:12:29.370048 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-jltbt"] Feb 02 13:12:29 crc kubenswrapper[4721]: I0202 13:12:29.370361 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jltbt" Feb 02 13:12:29 crc kubenswrapper[4721]: I0202 13:12:29.370938 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jltbt" Feb 02 13:12:29 crc kubenswrapper[4721]: I0202 13:12:29.374489 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-5w6sx"] Feb 02 13:12:29 crc kubenswrapper[4721]: I0202 13:12:29.374775 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-5w6sx" Feb 02 13:12:29 crc kubenswrapper[4721]: I0202 13:12:29.375394 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-5w6sx" Feb 02 13:12:29 crc kubenswrapper[4721]: I0202 13:12:29.378698 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx"] Feb 02 13:12:29 crc kubenswrapper[4721]: I0202 13:12:29.378844 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx" Feb 02 13:12:29 crc kubenswrapper[4721]: I0202 13:12:29.379437 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx" Feb 02 13:12:29 crc kubenswrapper[4721]: I0202 13:12:29.394137 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9"] Feb 02 13:12:29 crc kubenswrapper[4721]: I0202 13:12:29.394261 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9" Feb 02 13:12:29 crc kubenswrapper[4721]: I0202 13:12:29.394809 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9" Feb 02 13:12:29 crc kubenswrapper[4721]: E0202 13:12:29.433628 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-b2dk9_openshift-operators_7ac0e2d1-4762-4c40-84c9-db0bde4f956f_0(e13cadae6e67404dc5cc138b05d1a002778f8c87b9f279d8f66c027a11e6b233): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 13:12:29 crc kubenswrapper[4721]: E0202 13:12:29.433991 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-b2dk9_openshift-operators_7ac0e2d1-4762-4c40-84c9-db0bde4f956f_0(e13cadae6e67404dc5cc138b05d1a002778f8c87b9f279d8f66c027a11e6b233): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-b2dk9" Feb 02 13:12:29 crc kubenswrapper[4721]: E0202 13:12:29.434021 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-b2dk9_openshift-operators_7ac0e2d1-4762-4c40-84c9-db0bde4f956f_0(e13cadae6e67404dc5cc138b05d1a002778f8c87b9f279d8f66c027a11e6b233): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-b2dk9" Feb 02 13:12:29 crc kubenswrapper[4721]: E0202 13:12:29.434094 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-b2dk9_openshift-operators(7ac0e2d1-4762-4c40-84c9-db0bde4f956f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-b2dk9_openshift-operators(7ac0e2d1-4762-4c40-84c9-db0bde4f956f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-b2dk9_openshift-operators_7ac0e2d1-4762-4c40-84c9-db0bde4f956f_0(e13cadae6e67404dc5cc138b05d1a002778f8c87b9f279d8f66c027a11e6b233): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-b2dk9" podUID="7ac0e2d1-4762-4c40-84c9-db0bde4f956f" Feb 02 13:12:29 crc kubenswrapper[4721]: E0202 13:12:29.445936 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jltbt_openshift-operators_30a5c0d6-c773-4914-a3b1-1654a51817a9_0(744565d13fa0b7ab4ffc378ca0de13e51c331edae3ce5a6bc266a3e354e1199d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 13:12:29 crc kubenswrapper[4721]: E0202 13:12:29.445998 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jltbt_openshift-operators_30a5c0d6-c773-4914-a3b1-1654a51817a9_0(744565d13fa0b7ab4ffc378ca0de13e51c331edae3ce5a6bc266a3e354e1199d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jltbt" Feb 02 13:12:29 crc kubenswrapper[4721]: E0202 13:12:29.446018 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jltbt_openshift-operators_30a5c0d6-c773-4914-a3b1-1654a51817a9_0(744565d13fa0b7ab4ffc378ca0de13e51c331edae3ce5a6bc266a3e354e1199d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jltbt" Feb 02 13:12:29 crc kubenswrapper[4721]: E0202 13:12:29.446107 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-jltbt_openshift-operators(30a5c0d6-c773-4914-a3b1-1654a51817a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-jltbt_openshift-operators(30a5c0d6-c773-4914-a3b1-1654a51817a9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jltbt_openshift-operators_30a5c0d6-c773-4914-a3b1-1654a51817a9_0(744565d13fa0b7ab4ffc378ca0de13e51c331edae3ce5a6bc266a3e354e1199d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jltbt" podUID="30a5c0d6-c773-4914-a3b1-1654a51817a9" Feb 02 13:12:29 crc kubenswrapper[4721]: E0202 13:12:29.477463 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx_openshift-operators_a1f92c36-0e50-485e-a728-7b42f1ab44c4_0(31249dcddf3c0fc57945822e001a973e64dee694abb2cf2ab341207103ac9f47): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 13:12:29 crc kubenswrapper[4721]: E0202 13:12:29.477537 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx_openshift-operators_a1f92c36-0e50-485e-a728-7b42f1ab44c4_0(31249dcddf3c0fc57945822e001a973e64dee694abb2cf2ab341207103ac9f47): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx" Feb 02 13:12:29 crc kubenswrapper[4721]: E0202 13:12:29.477557 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx_openshift-operators_a1f92c36-0e50-485e-a728-7b42f1ab44c4_0(31249dcddf3c0fc57945822e001a973e64dee694abb2cf2ab341207103ac9f47): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx" Feb 02 13:12:29 crc kubenswrapper[4721]: E0202 13:12:29.477608 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx_openshift-operators(a1f92c36-0e50-485e-a728-7b42f1ab44c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx_openshift-operators(a1f92c36-0e50-485e-a728-7b42f1ab44c4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx_openshift-operators_a1f92c36-0e50-485e-a728-7b42f1ab44c4_0(31249dcddf3c0fc57945822e001a973e64dee694abb2cf2ab341207103ac9f47): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx" podUID="a1f92c36-0e50-485e-a728-7b42f1ab44c4" Feb 02 13:12:29 crc kubenswrapper[4721]: E0202 13:12:29.485850 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-5w6sx_openshift-operators_a3affad2-ab35-4604-8239-56f69bf3727f_0(8d132a561b69802e29a8ee38347c502a9fd189fae8c8a62b003c376c63023b43): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 13:12:29 crc kubenswrapper[4721]: E0202 13:12:29.485928 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-5w6sx_openshift-operators_a3affad2-ab35-4604-8239-56f69bf3727f_0(8d132a561b69802e29a8ee38347c502a9fd189fae8c8a62b003c376c63023b43): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-5w6sx" Feb 02 13:12:29 crc kubenswrapper[4721]: E0202 13:12:29.485952 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-5w6sx_openshift-operators_a3affad2-ab35-4604-8239-56f69bf3727f_0(8d132a561b69802e29a8ee38347c502a9fd189fae8c8a62b003c376c63023b43): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-5w6sx" Feb 02 13:12:29 crc kubenswrapper[4721]: E0202 13:12:29.486009 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-5w6sx_openshift-operators(a3affad2-ab35-4604-8239-56f69bf3727f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-5w6sx_openshift-operators(a3affad2-ab35-4604-8239-56f69bf3727f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-5w6sx_openshift-operators_a3affad2-ab35-4604-8239-56f69bf3727f_0(8d132a561b69802e29a8ee38347c502a9fd189fae8c8a62b003c376c63023b43): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-5w6sx" podUID="a3affad2-ab35-4604-8239-56f69bf3727f" Feb 02 13:12:29 crc kubenswrapper[4721]: E0202 13:12:29.498951 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9_openshift-operators_41c83c75-cfc8-4c33-97cf-484cc7dcd812_0(925a509b9c0ad7623eb20fdd705a0eab537b0e01d6f3709632c58d5ca1f0b964): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 13:12:29 crc kubenswrapper[4721]: E0202 13:12:29.499010 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9_openshift-operators_41c83c75-cfc8-4c33-97cf-484cc7dcd812_0(925a509b9c0ad7623eb20fdd705a0eab537b0e01d6f3709632c58d5ca1f0b964): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9" Feb 02 13:12:29 crc kubenswrapper[4721]: E0202 13:12:29.499030 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9_openshift-operators_41c83c75-cfc8-4c33-97cf-484cc7dcd812_0(925a509b9c0ad7623eb20fdd705a0eab537b0e01d6f3709632c58d5ca1f0b964): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9" Feb 02 13:12:29 crc kubenswrapper[4721]: E0202 13:12:29.499089 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9_openshift-operators(41c83c75-cfc8-4c33-97cf-484cc7dcd812)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9_openshift-operators(41c83c75-cfc8-4c33-97cf-484cc7dcd812)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9_openshift-operators_41c83c75-cfc8-4c33-97cf-484cc7dcd812_0(925a509b9c0ad7623eb20fdd705a0eab537b0e01d6f3709632c58d5ca1f0b964): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9" podUID="41c83c75-cfc8-4c33-97cf-484cc7dcd812" Feb 02 13:12:35 crc kubenswrapper[4721]: I0202 13:12:35.409290 4721 scope.go:117] "RemoveContainer" containerID="c98892a7ff179bcba871f45746b3c85a83090c01da93d13aeaeb2a282472689d" Feb 02 13:12:35 crc kubenswrapper[4721]: E0202 13:12:35.409894 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-ltw7d_openshift-multus(5ba84858-caaa-4fba-8eaf-9f7ddece0b3a)\"" pod="openshift-multus/multus-ltw7d" podUID="5ba84858-caaa-4fba-8eaf-9f7ddece0b3a" Feb 02 13:12:41 crc kubenswrapper[4721]: I0202 13:12:41.409587 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jltbt" Feb 02 13:12:41 crc kubenswrapper[4721]: I0202 13:12:41.410861 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jltbt" Feb 02 13:12:41 crc kubenswrapper[4721]: E0202 13:12:41.437084 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jltbt_openshift-operators_30a5c0d6-c773-4914-a3b1-1654a51817a9_0(899f8c0ce7a06f9795d4a4d32db0b45716345b6ca2a00aa1464994e8e1344961): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 13:12:41 crc kubenswrapper[4721]: E0202 13:12:41.437151 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jltbt_openshift-operators_30a5c0d6-c773-4914-a3b1-1654a51817a9_0(899f8c0ce7a06f9795d4a4d32db0b45716345b6ca2a00aa1464994e8e1344961): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jltbt" Feb 02 13:12:41 crc kubenswrapper[4721]: E0202 13:12:41.437173 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jltbt_openshift-operators_30a5c0d6-c773-4914-a3b1-1654a51817a9_0(899f8c0ce7a06f9795d4a4d32db0b45716345b6ca2a00aa1464994e8e1344961): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jltbt" Feb 02 13:12:41 crc kubenswrapper[4721]: E0202 13:12:41.437220 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-jltbt_openshift-operators(30a5c0d6-c773-4914-a3b1-1654a51817a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-jltbt_openshift-operators(30a5c0d6-c773-4914-a3b1-1654a51817a9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jltbt_openshift-operators_30a5c0d6-c773-4914-a3b1-1654a51817a9_0(899f8c0ce7a06f9795d4a4d32db0b45716345b6ca2a00aa1464994e8e1344961): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jltbt" podUID="30a5c0d6-c773-4914-a3b1-1654a51817a9" Feb 02 13:12:42 crc kubenswrapper[4721]: I0202 13:12:42.408980 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-5w6sx" Feb 02 13:12:42 crc kubenswrapper[4721]: I0202 13:12:42.409259 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9" Feb 02 13:12:42 crc kubenswrapper[4721]: I0202 13:12:42.411927 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-5w6sx" Feb 02 13:12:42 crc kubenswrapper[4721]: I0202 13:12:42.415162 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9" Feb 02 13:12:42 crc kubenswrapper[4721]: E0202 13:12:42.452360 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9_openshift-operators_41c83c75-cfc8-4c33-97cf-484cc7dcd812_0(45c55302ef56cadb0d8b014e10cc03381dacc7de88d6435b11300183368c7e46): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 13:12:42 crc kubenswrapper[4721]: E0202 13:12:42.452444 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9_openshift-operators_41c83c75-cfc8-4c33-97cf-484cc7dcd812_0(45c55302ef56cadb0d8b014e10cc03381dacc7de88d6435b11300183368c7e46): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9" Feb 02 13:12:42 crc kubenswrapper[4721]: E0202 13:12:42.452465 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9_openshift-operators_41c83c75-cfc8-4c33-97cf-484cc7dcd812_0(45c55302ef56cadb0d8b014e10cc03381dacc7de88d6435b11300183368c7e46): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9" Feb 02 13:12:42 crc kubenswrapper[4721]: E0202 13:12:42.452544 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9_openshift-operators(41c83c75-cfc8-4c33-97cf-484cc7dcd812)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9_openshift-operators(41c83c75-cfc8-4c33-97cf-484cc7dcd812)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9_openshift-operators_41c83c75-cfc8-4c33-97cf-484cc7dcd812_0(45c55302ef56cadb0d8b014e10cc03381dacc7de88d6435b11300183368c7e46): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9" podUID="41c83c75-cfc8-4c33-97cf-484cc7dcd812" Feb 02 13:12:42 crc kubenswrapper[4721]: E0202 13:12:42.462334 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-5w6sx_openshift-operators_a3affad2-ab35-4604-8239-56f69bf3727f_0(8d3a4c19a1c7ed3a22d9a1c5e20de30186a33a2375386e695d2e2a32068b427b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 13:12:42 crc kubenswrapper[4721]: E0202 13:12:42.462407 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-5w6sx_openshift-operators_a3affad2-ab35-4604-8239-56f69bf3727f_0(8d3a4c19a1c7ed3a22d9a1c5e20de30186a33a2375386e695d2e2a32068b427b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-5w6sx" Feb 02 13:12:42 crc kubenswrapper[4721]: E0202 13:12:42.462435 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-5w6sx_openshift-operators_a3affad2-ab35-4604-8239-56f69bf3727f_0(8d3a4c19a1c7ed3a22d9a1c5e20de30186a33a2375386e695d2e2a32068b427b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-5w6sx" Feb 02 13:12:42 crc kubenswrapper[4721]: E0202 13:12:42.462485 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-5w6sx_openshift-operators(a3affad2-ab35-4604-8239-56f69bf3727f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-5w6sx_openshift-operators(a3affad2-ab35-4604-8239-56f69bf3727f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-5w6sx_openshift-operators_a3affad2-ab35-4604-8239-56f69bf3727f_0(8d3a4c19a1c7ed3a22d9a1c5e20de30186a33a2375386e695d2e2a32068b427b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-5w6sx" podUID="a3affad2-ab35-4604-8239-56f69bf3727f" Feb 02 13:12:44 crc kubenswrapper[4721]: I0202 13:12:44.409309 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx" Feb 02 13:12:44 crc kubenswrapper[4721]: I0202 13:12:44.409326 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-b2dk9" Feb 02 13:12:44 crc kubenswrapper[4721]: I0202 13:12:44.410184 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx" Feb 02 13:12:44 crc kubenswrapper[4721]: I0202 13:12:44.410398 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-b2dk9" Feb 02 13:12:44 crc kubenswrapper[4721]: E0202 13:12:44.462834 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx_openshift-operators_a1f92c36-0e50-485e-a728-7b42f1ab44c4_0(0611a943d53f331cf56689346926abfb020aa089e33ec02826f61b3548f26290): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 13:12:44 crc kubenswrapper[4721]: E0202 13:12:44.462911 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx_openshift-operators_a1f92c36-0e50-485e-a728-7b42f1ab44c4_0(0611a943d53f331cf56689346926abfb020aa089e33ec02826f61b3548f26290): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx" Feb 02 13:12:44 crc kubenswrapper[4721]: E0202 13:12:44.462932 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx_openshift-operators_a1f92c36-0e50-485e-a728-7b42f1ab44c4_0(0611a943d53f331cf56689346926abfb020aa089e33ec02826f61b3548f26290): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx" Feb 02 13:12:44 crc kubenswrapper[4721]: E0202 13:12:44.462977 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx_openshift-operators(a1f92c36-0e50-485e-a728-7b42f1ab44c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx_openshift-operators(a1f92c36-0e50-485e-a728-7b42f1ab44c4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx_openshift-operators_a1f92c36-0e50-485e-a728-7b42f1ab44c4_0(0611a943d53f331cf56689346926abfb020aa089e33ec02826f61b3548f26290): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx" podUID="a1f92c36-0e50-485e-a728-7b42f1ab44c4" Feb 02 13:12:44 crc kubenswrapper[4721]: E0202 13:12:44.467311 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-b2dk9_openshift-operators_7ac0e2d1-4762-4c40-84c9-db0bde4f956f_0(40000d12aa3de8bcf4ce996ce73297b583c46039f3b66517b02b6fc3487e9d31): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 13:12:44 crc kubenswrapper[4721]: E0202 13:12:44.467357 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-b2dk9_openshift-operators_7ac0e2d1-4762-4c40-84c9-db0bde4f956f_0(40000d12aa3de8bcf4ce996ce73297b583c46039f3b66517b02b6fc3487e9d31): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-b2dk9" Feb 02 13:12:44 crc kubenswrapper[4721]: E0202 13:12:44.467376 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-b2dk9_openshift-operators_7ac0e2d1-4762-4c40-84c9-db0bde4f956f_0(40000d12aa3de8bcf4ce996ce73297b583c46039f3b66517b02b6fc3487e9d31): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-b2dk9" Feb 02 13:12:44 crc kubenswrapper[4721]: E0202 13:12:44.467412 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-b2dk9_openshift-operators(7ac0e2d1-4762-4c40-84c9-db0bde4f956f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-b2dk9_openshift-operators(7ac0e2d1-4762-4c40-84c9-db0bde4f956f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-b2dk9_openshift-operators_7ac0e2d1-4762-4c40-84c9-db0bde4f956f_0(40000d12aa3de8bcf4ce996ce73297b583c46039f3b66517b02b6fc3487e9d31): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-b2dk9" podUID="7ac0e2d1-4762-4c40-84c9-db0bde4f956f" Feb 02 13:12:48 crc kubenswrapper[4721]: I0202 13:12:48.409932 4721 scope.go:117] "RemoveContainer" containerID="c98892a7ff179bcba871f45746b3c85a83090c01da93d13aeaeb2a282472689d" Feb 02 13:12:48 crc kubenswrapper[4721]: I0202 13:12:48.928972 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ltw7d_5ba84858-caaa-4fba-8eaf-9f7ddece0b3a/kube-multus/2.log" Feb 02 13:12:48 crc kubenswrapper[4721]: I0202 13:12:48.929330 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ltw7d" event={"ID":"5ba84858-caaa-4fba-8eaf-9f7ddece0b3a","Type":"ContainerStarted","Data":"2ac6b70129b83205e808ed8ecdac9b06fb0b8752b06e7251666022066e956189"} Feb 02 13:12:51 crc kubenswrapper[4721]: I0202 13:12:51.742423 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8dtnt" Feb 02 13:12:52 crc kubenswrapper[4721]: I0202 13:12:52.409617 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jltbt" Feb 02 13:12:52 crc kubenswrapper[4721]: I0202 13:12:52.410046 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jltbt" Feb 02 13:12:52 crc kubenswrapper[4721]: I0202 13:12:52.787722 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-jltbt"] Feb 02 13:12:52 crc kubenswrapper[4721]: W0202 13:12:52.795256 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30a5c0d6_c773_4914_a3b1_1654a51817a9.slice/crio-2f9c4789155105e7d051b78b863e88c79803180d46590922638146e7c812eb92 WatchSource:0}: Error finding container 2f9c4789155105e7d051b78b863e88c79803180d46590922638146e7c812eb92: Status 404 returned error can't find the container with id 2f9c4789155105e7d051b78b863e88c79803180d46590922638146e7c812eb92 Feb 02 13:12:52 crc kubenswrapper[4721]: I0202 13:12:52.958385 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jltbt" event={"ID":"30a5c0d6-c773-4914-a3b1-1654a51817a9","Type":"ContainerStarted","Data":"2f9c4789155105e7d051b78b863e88c79803180d46590922638146e7c812eb92"} Feb 02 13:12:54 crc kubenswrapper[4721]: I0202 13:12:54.409174 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-5w6sx" Feb 02 13:12:54 crc kubenswrapper[4721]: I0202 13:12:54.410256 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-5w6sx" Feb 02 13:12:54 crc kubenswrapper[4721]: I0202 13:12:54.601376 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-5w6sx"] Feb 02 13:12:54 crc kubenswrapper[4721]: I0202 13:12:54.968852 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-5w6sx" event={"ID":"a3affad2-ab35-4604-8239-56f69bf3727f","Type":"ContainerStarted","Data":"41cd1333279decf96c22587391e7658b7932c4b5392a3c08ce0322d97d840237"} Feb 02 13:12:56 crc kubenswrapper[4721]: I0202 13:12:56.409755 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-b2dk9" Feb 02 13:12:56 crc kubenswrapper[4721]: I0202 13:12:56.410610 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-b2dk9" Feb 02 13:12:57 crc kubenswrapper[4721]: I0202 13:12:57.409276 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9" Feb 02 13:12:57 crc kubenswrapper[4721]: I0202 13:12:57.409781 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9" Feb 02 13:12:58 crc kubenswrapper[4721]: I0202 13:12:58.983160 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-b2dk9"] Feb 02 13:12:58 crc kubenswrapper[4721]: W0202 13:12:58.986935 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ac0e2d1_4762_4c40_84c9_db0bde4f956f.slice/crio-4dba87266c45f42bb89bafe5ce1d1c90370220ef5d61cb2348e210117f8e1d61 WatchSource:0}: Error finding container 4dba87266c45f42bb89bafe5ce1d1c90370220ef5d61cb2348e210117f8e1d61: Status 404 returned error can't find the container with id 4dba87266c45f42bb89bafe5ce1d1c90370220ef5d61cb2348e210117f8e1d61 Feb 02 13:12:58 crc kubenswrapper[4721]: I0202 13:12:58.998369 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-b2dk9" event={"ID":"7ac0e2d1-4762-4c40-84c9-db0bde4f956f","Type":"ContainerStarted","Data":"4dba87266c45f42bb89bafe5ce1d1c90370220ef5d61cb2348e210117f8e1d61"} Feb 02 13:12:59 crc kubenswrapper[4721]: I0202 13:12:59.000720 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-5w6sx" event={"ID":"a3affad2-ab35-4604-8239-56f69bf3727f","Type":"ContainerStarted","Data":"11b8000d94074919c4278e3638fecaea14a5fdd31740eb1c45cae3b16a9bb50f"} Feb 02 13:12:59 crc kubenswrapper[4721]: I0202 13:12:59.000819 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-5w6sx" Feb 02 13:12:59 crc kubenswrapper[4721]: I0202 13:12:59.025741 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-5w6sx" podStartSLOduration=28.845120914 podStartE2EDuration="33.025711675s" podCreationTimestamp="2026-02-02 13:12:26 +0000 UTC" firstStartedPulling="2026-02-02 13:12:54.621529743 +0000 UTC m=+714.924044132" lastFinishedPulling="2026-02-02 13:12:58.802120504 +0000 UTC m=+719.104634893" observedRunningTime="2026-02-02 13:12:59.018935686 +0000 UTC m=+719.321450095" watchObservedRunningTime="2026-02-02 13:12:59.025711675 +0000 UTC m=+719.328226064" Feb 02 13:12:59 crc kubenswrapper[4721]: I0202 13:12:59.044221 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9"] Feb 02 13:12:59 crc kubenswrapper[4721]: W0202 13:12:59.050385 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41c83c75_cfc8_4c33_97cf_484cc7dcd812.slice/crio-1b2dc7067a50e7f3818fcaca5753f83828b37c0c088154ab826e4b03158e2ed0 WatchSource:0}: Error finding container 
1b2dc7067a50e7f3818fcaca5753f83828b37c0c088154ab826e4b03158e2ed0: Status 404 returned error can't find the container with id 1b2dc7067a50e7f3818fcaca5753f83828b37c0c088154ab826e4b03158e2ed0 Feb 02 13:12:59 crc kubenswrapper[4721]: I0202 13:12:59.409145 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx" Feb 02 13:12:59 crc kubenswrapper[4721]: I0202 13:12:59.409678 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx" Feb 02 13:12:59 crc kubenswrapper[4721]: I0202 13:12:59.812038 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx"] Feb 02 13:12:59 crc kubenswrapper[4721]: W0202 13:12:59.821340 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1f92c36_0e50_485e_a728_7b42f1ab44c4.slice/crio-5ec6a4c877d359b7d81f48561e68abb494f6d6c9455548b3b8e86b701d39f701 WatchSource:0}: Error finding container 5ec6a4c877d359b7d81f48561e68abb494f6d6c9455548b3b8e86b701d39f701: Status 404 returned error can't find the container with id 5ec6a4c877d359b7d81f48561e68abb494f6d6c9455548b3b8e86b701d39f701 Feb 02 13:13:00 crc kubenswrapper[4721]: I0202 13:13:00.009024 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx" event={"ID":"a1f92c36-0e50-485e-a728-7b42f1ab44c4","Type":"ContainerStarted","Data":"5ec6a4c877d359b7d81f48561e68abb494f6d6c9455548b3b8e86b701d39f701"} Feb 02 13:13:00 crc kubenswrapper[4721]: I0202 13:13:00.011958 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jltbt" event={"ID":"30a5c0d6-c773-4914-a3b1-1654a51817a9","Type":"ContainerStarted","Data":"b1d9f5aff26b9d078c5af4fbf2ee4d1cccb2f70e4ec53d06468a84b11e2b7df9"} Feb 02 13:13:00 crc kubenswrapper[4721]: I0202 13:13:00.015381 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9" event={"ID":"41c83c75-cfc8-4c33-97cf-484cc7dcd812","Type":"ContainerStarted","Data":"1b2dc7067a50e7f3818fcaca5753f83828b37c0c088154ab826e4b03158e2ed0"} Feb 02 13:13:00 crc kubenswrapper[4721]: I0202 13:13:00.036740 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jltbt" podStartSLOduration=28.014187723 podStartE2EDuration="34.036718476s" podCreationTimestamp="2026-02-02 13:12:26 +0000 UTC" firstStartedPulling="2026-02-02 13:12:52.797034962 +0000 UTC m=+713.099549351" lastFinishedPulling="2026-02-02 13:12:58.819565715 +0000 UTC m=+719.122080104" observedRunningTime="2026-02-02 13:13:00.032831704 +0000 UTC m=+720.335346093" watchObservedRunningTime="2026-02-02 13:13:00.036718476 +0000 UTC m=+720.339232875" Feb 02 13:13:03 crc kubenswrapper[4721]: I0202 13:13:03.038461 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx" event={"ID":"a1f92c36-0e50-485e-a728-7b42f1ab44c4","Type":"ContainerStarted","Data":"929a4b173449b94b30dab348faef189ffae53eab9fd6d0e48174ad4c4f8417a5"} Feb 02 13:13:03 crc kubenswrapper[4721]: I0202 13:13:03.041384 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9" event={"ID":"41c83c75-cfc8-4c33-97cf-484cc7dcd812","Type":"ContainerStarted","Data":"aad0efe57936a1531bdf3042e0f38c209b48ecbdfc3a30e837d7b60f96ab6d9b"} Feb 02 13:13:03 crc kubenswrapper[4721]: I0202 13:13:03.061124 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx" podStartSLOduration=34.963326261 podStartE2EDuration="37.061096713s" podCreationTimestamp="2026-02-02 13:12:26 +0000 UTC" firstStartedPulling="2026-02-02 13:12:59.823539201 +0000 UTC m=+720.126053600" lastFinishedPulling="2026-02-02 13:13:01.921309663 +0000 UTC m=+722.223824052" observedRunningTime="2026-02-02 13:13:03.053574835 +0000 UTC m=+723.356089234" watchObservedRunningTime="2026-02-02 13:13:03.061096713 +0000 UTC m=+723.363611112" Feb 02 13:13:03 crc kubenswrapper[4721]: I0202 13:13:03.100921 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9" podStartSLOduration=34.228947799 podStartE2EDuration="37.100896893s" podCreationTimestamp="2026-02-02 13:12:26 +0000 UTC" firstStartedPulling="2026-02-02 13:12:59.053347914 +0000 UTC m=+719.355862303" lastFinishedPulling="2026-02-02 13:13:01.925297008 +0000 UTC m=+722.227811397" observedRunningTime="2026-02-02 13:13:03.090935911 +0000 UTC m=+723.393450300" watchObservedRunningTime="2026-02-02 13:13:03.100896893 +0000 UTC m=+723.403411282" Feb 02 13:13:06 crc kubenswrapper[4721]: I0202 13:13:06.060298 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-b2dk9" event={"ID":"7ac0e2d1-4762-4c40-84c9-db0bde4f956f","Type":"ContainerStarted","Data":"584beccc490533b60eb3b45ff37e098ffc0a4a9261ddf27e505ca1c196da960a"} Feb 02 13:13:06 crc kubenswrapper[4721]: I0202 13:13:06.060936 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-b2dk9" Feb 02 13:13:06 crc kubenswrapper[4721]: I0202 13:13:06.067279 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-b2dk9" Feb 02 13:13:06 crc kubenswrapper[4721]: I0202 13:13:06.086788 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-b2dk9" podStartSLOduration=34.050964233 podStartE2EDuration="40.086766055s" podCreationTimestamp="2026-02-02 13:12:26 +0000 UTC" firstStartedPulling="2026-02-02 13:12:58.989897779 +0000 UTC m=+719.292412168" lastFinishedPulling="2026-02-02 13:13:05.025699601 +0000 UTC m=+725.328213990" observedRunningTime="2026-02-02 13:13:06.085280776 +0000 UTC m=+726.387795165" watchObservedRunningTime="2026-02-02 13:13:06.086766055 +0000 UTC m=+726.389280444" Feb 02 13:13:07 crc kubenswrapper[4721]: I0202 13:13:07.179725 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-5w6sx" Feb 02 13:13:13 crc kubenswrapper[4721]: I0202 13:13:13.522024 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-k5vmr"] Feb 02 13:13:13 crc kubenswrapper[4721]: I0202 13:13:13.524605 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-k5vmr" Feb 02 13:13:13 crc kubenswrapper[4721]: I0202 13:13:13.530637 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 02 13:13:13 crc kubenswrapper[4721]: I0202 13:13:13.530840 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 02 13:13:13 crc kubenswrapper[4721]: I0202 13:13:13.530973 4721 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-6xznh" Feb 02 13:13:13 crc kubenswrapper[4721]: I0202 13:13:13.535212 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-k5vmr"] Feb 02 13:13:13 crc kubenswrapper[4721]: I0202 13:13:13.543550 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-qxmnx"] Feb 02 13:13:13 crc kubenswrapper[4721]: I0202 13:13:13.544649 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-qxmnx" Feb 02 13:13:13 crc kubenswrapper[4721]: I0202 13:13:13.547440 4721 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-tcrxr" Feb 02 13:13:13 crc kubenswrapper[4721]: I0202 13:13:13.559900 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-qxmnx"] Feb 02 13:13:13 crc kubenswrapper[4721]: I0202 13:13:13.585307 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-xjhrq"] Feb 02 13:13:13 crc kubenswrapper[4721]: I0202 13:13:13.586264 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-xjhrq" Feb 02 13:13:13 crc kubenswrapper[4721]: I0202 13:13:13.588264 4721 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-ng4mz" Feb 02 13:13:13 crc kubenswrapper[4721]: I0202 13:13:13.596121 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv7cf\" (UniqueName: \"kubernetes.io/projected/caf3dcb7-c58d-4d36-9329-c9b8d3c354a8-kube-api-access-tv7cf\") pod \"cert-manager-858654f9db-qxmnx\" (UID: \"caf3dcb7-c58d-4d36-9329-c9b8d3c354a8\") " pod="cert-manager/cert-manager-858654f9db-qxmnx" Feb 02 13:13:13 crc kubenswrapper[4721]: I0202 13:13:13.596297 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-xjhrq"] Feb 02 13:13:13 crc kubenswrapper[4721]: I0202 13:13:13.596484 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km82n\" (UniqueName: \"kubernetes.io/projected/988d3eab-804d-4db0-8855-b63ebbeabce4-kube-api-access-km82n\") pod \"cert-manager-cainjector-cf98fcc89-k5vmr\" (UID: \"988d3eab-804d-4db0-8855-b63ebbeabce4\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-k5vmr" Feb 02 13:13:13 crc kubenswrapper[4721]: I0202 13:13:13.697835 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv7cf\" (UniqueName: \"kubernetes.io/projected/caf3dcb7-c58d-4d36-9329-c9b8d3c354a8-kube-api-access-tv7cf\") pod \"cert-manager-858654f9db-qxmnx\" (UID: \"caf3dcb7-c58d-4d36-9329-c9b8d3c354a8\") " pod="cert-manager/cert-manager-858654f9db-qxmnx" Feb 02 13:13:13 crc kubenswrapper[4721]: I0202 
13:13:13.697913 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjjlf\" (UniqueName: \"kubernetes.io/projected/aff1475e-36c5-471a-b04e-01cefc2d2763-kube-api-access-rjjlf\") pod \"cert-manager-webhook-687f57d79b-xjhrq\" (UID: \"aff1475e-36c5-471a-b04e-01cefc2d2763\") " pod="cert-manager/cert-manager-webhook-687f57d79b-xjhrq" Feb 02 13:13:13 crc kubenswrapper[4721]: I0202 13:13:13.697969 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km82n\" (UniqueName: \"kubernetes.io/projected/988d3eab-804d-4db0-8855-b63ebbeabce4-kube-api-access-km82n\") pod \"cert-manager-cainjector-cf98fcc89-k5vmr\" (UID: \"988d3eab-804d-4db0-8855-b63ebbeabce4\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-k5vmr" Feb 02 13:13:13 crc kubenswrapper[4721]: I0202 13:13:13.716140 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km82n\" (UniqueName: \"kubernetes.io/projected/988d3eab-804d-4db0-8855-b63ebbeabce4-kube-api-access-km82n\") pod \"cert-manager-cainjector-cf98fcc89-k5vmr\" (UID: \"988d3eab-804d-4db0-8855-b63ebbeabce4\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-k5vmr" Feb 02 13:13:13 crc kubenswrapper[4721]: I0202 13:13:13.716849 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv7cf\" (UniqueName: \"kubernetes.io/projected/caf3dcb7-c58d-4d36-9329-c9b8d3c354a8-kube-api-access-tv7cf\") pod \"cert-manager-858654f9db-qxmnx\" (UID: \"caf3dcb7-c58d-4d36-9329-c9b8d3c354a8\") " pod="cert-manager/cert-manager-858654f9db-qxmnx" Feb 02 13:13:13 crc kubenswrapper[4721]: I0202 13:13:13.799045 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjjlf\" (UniqueName: \"kubernetes.io/projected/aff1475e-36c5-471a-b04e-01cefc2d2763-kube-api-access-rjjlf\") pod \"cert-manager-webhook-687f57d79b-xjhrq\" (UID: \"aff1475e-36c5-471a-b04e-01cefc2d2763\") " pod="cert-manager/cert-manager-webhook-687f57d79b-xjhrq" Feb 02 13:13:13 crc kubenswrapper[4721]: I0202 13:13:13.815543 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjjlf\" (UniqueName: \"kubernetes.io/projected/aff1475e-36c5-471a-b04e-01cefc2d2763-kube-api-access-rjjlf\") pod \"cert-manager-webhook-687f57d79b-xjhrq\" (UID: \"aff1475e-36c5-471a-b04e-01cefc2d2763\") " pod="cert-manager/cert-manager-webhook-687f57d79b-xjhrq" Feb 02 13:13:13 crc kubenswrapper[4721]: I0202 13:13:13.843144 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-k5vmr" Feb 02 13:13:13 crc kubenswrapper[4721]: I0202 13:13:13.861368 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-qxmnx" Feb 02 13:13:13 crc kubenswrapper[4721]: I0202 13:13:13.901596 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-xjhrq" Feb 02 13:13:14 crc kubenswrapper[4721]: I0202 13:13:14.325909 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-k5vmr"] Feb 02 13:13:14 crc kubenswrapper[4721]: I0202 13:13:14.390110 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-xjhrq"] Feb 02 13:13:14 crc kubenswrapper[4721]: W0202 13:13:14.390932 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaff1475e_36c5_471a_b04e_01cefc2d2763.slice/crio-949ec2ec85c9aeb5fc90535cc0e3929a004fa0cdd739d85863eb24d30e78dee0 WatchSource:0}: Error finding container 949ec2ec85c9aeb5fc90535cc0e3929a004fa0cdd739d85863eb24d30e78dee0: Status 404 returned error can't find the container with id 949ec2ec85c9aeb5fc90535cc0e3929a004fa0cdd739d85863eb24d30e78dee0 Feb 02 13:13:14 crc kubenswrapper[4721]: W0202 13:13:14.422293 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcaf3dcb7_c58d_4d36_9329_c9b8d3c354a8.slice/crio-a786d4cdab57400c86d985b797499e7d1b54c4388c492e8f1ee09ec1091cf5d2 WatchSource:0}: Error finding container a786d4cdab57400c86d985b797499e7d1b54c4388c492e8f1ee09ec1091cf5d2: Status 404 returned error can't find the container with id a786d4cdab57400c86d985b797499e7d1b54c4388c492e8f1ee09ec1091cf5d2 Feb 02 13:13:14 crc kubenswrapper[4721]: I0202 13:13:14.446920 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-qxmnx"] Feb 02 13:13:15 crc kubenswrapper[4721]: I0202 13:13:15.113560 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-xjhrq" event={"ID":"aff1475e-36c5-471a-b04e-01cefc2d2763","Type":"ContainerStarted","Data":"949ec2ec85c9aeb5fc90535cc0e3929a004fa0cdd739d85863eb24d30e78dee0"} Feb 02 13:13:15 crc kubenswrapper[4721]: I0202 13:13:15.115220 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-qxmnx" event={"ID":"caf3dcb7-c58d-4d36-9329-c9b8d3c354a8","Type":"ContainerStarted","Data":"a786d4cdab57400c86d985b797499e7d1b54c4388c492e8f1ee09ec1091cf5d2"} Feb 02 13:13:15 crc kubenswrapper[4721]: I0202 13:13:15.116332 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-k5vmr" event={"ID":"988d3eab-804d-4db0-8855-b63ebbeabce4","Type":"ContainerStarted","Data":"21844476f36dfaca63d9ed98ea9c80e9eb9f93d3b4829b98fa17459c28d6bef7"} Feb 02 13:13:19 crc kubenswrapper[4721]: I0202 13:13:19.140737 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-xjhrq" event={"ID":"aff1475e-36c5-471a-b04e-01cefc2d2763","Type":"ContainerStarted","Data":"4fc152b4b8f486654322c2ba4bfd1ebfc4890305d331d879a859186db0a95657"} Feb 02 13:13:19 crc kubenswrapper[4721]: I0202 13:13:19.141782 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-xjhrq" Feb 02 13:13:19 crc kubenswrapper[4721]: I0202 13:13:19.142945 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-qxmnx" event={"ID":"caf3dcb7-c58d-4d36-9329-c9b8d3c354a8","Type":"ContainerStarted","Data":"af4951d27df17a0c60d26dc17558c3dee39a008d040fd3a78623db0ab51f9626"} Feb 02 13:13:19 crc kubenswrapper[4721]: I0202 
13:13:19.145538 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-k5vmr" event={"ID":"988d3eab-804d-4db0-8855-b63ebbeabce4","Type":"ContainerStarted","Data":"3bdbbf9b1e755b63af56cb73e4c71330c2abb937996e7c8c2d39a0720db91488"} Feb 02 13:13:19 crc kubenswrapper[4721]: I0202 13:13:19.161953 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-xjhrq" podStartSLOduration=1.705942228 podStartE2EDuration="6.161927736s" podCreationTimestamp="2026-02-02 13:13:13 +0000 UTC" firstStartedPulling="2026-02-02 13:13:14.39759024 +0000 UTC m=+734.700104629" lastFinishedPulling="2026-02-02 13:13:18.853575748 +0000 UTC m=+739.156090137" observedRunningTime="2026-02-02 13:13:19.155967049 +0000 UTC m=+739.458481468" watchObservedRunningTime="2026-02-02 13:13:19.161927736 +0000 UTC m=+739.464442145" Feb 02 13:13:19 crc kubenswrapper[4721]: I0202 13:13:19.173444 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-qxmnx" podStartSLOduration=1.690537812 podStartE2EDuration="6.17342287s" podCreationTimestamp="2026-02-02 13:13:13 +0000 UTC" firstStartedPulling="2026-02-02 13:13:14.424403007 +0000 UTC m=+734.726917386" lastFinishedPulling="2026-02-02 13:13:18.907288055 +0000 UTC m=+739.209802444" observedRunningTime="2026-02-02 13:13:19.170393099 +0000 UTC m=+739.472907488" watchObservedRunningTime="2026-02-02 13:13:19.17342287 +0000 UTC m=+739.475937279" Feb 02 13:13:28 crc kubenswrapper[4721]: I0202 13:13:28.905194 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-xjhrq" Feb 02 13:13:28 crc kubenswrapper[4721]: I0202 13:13:28.923610 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-k5vmr" podStartSLOduration=11.34898778 podStartE2EDuration="15.923588959s" podCreationTimestamp="2026-02-02 13:13:13 +0000 UTC" firstStartedPulling="2026-02-02 13:13:14.339055625 +0000 UTC m=+734.641570014" lastFinishedPulling="2026-02-02 13:13:18.913656804 +0000 UTC m=+739.216171193" observedRunningTime="2026-02-02 13:13:19.199619771 +0000 UTC m=+739.502134160" watchObservedRunningTime="2026-02-02 13:13:28.923588959 +0000 UTC m=+749.226103358" Feb 02 13:13:45 crc kubenswrapper[4721]: I0202 13:13:45.446004 4721 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 13:13:53 crc kubenswrapper[4721]: I0202 13:13:53.335057 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ngvnj"] Feb 02 13:13:53 crc kubenswrapper[4721]: I0202 13:13:53.343443 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ngvnj" Feb 02 13:13:53 crc kubenswrapper[4721]: I0202 13:13:53.379480 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ngvnj"] Feb 02 13:13:53 crc kubenswrapper[4721]: I0202 13:13:53.414678 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4bc9245-cc92-4fa8-a195-74b2c1fa3018-catalog-content\") pod \"community-operators-ngvnj\" (UID: \"e4bc9245-cc92-4fa8-a195-74b2c1fa3018\") " pod="openshift-marketplace/community-operators-ngvnj" Feb 02 13:13:53 crc kubenswrapper[4721]: I0202 13:13:53.415046 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsw8s\" (UniqueName: \"kubernetes.io/projected/e4bc9245-cc92-4fa8-a195-74b2c1fa3018-kube-api-access-fsw8s\") pod \"community-operators-ngvnj\" (UID: \"e4bc9245-cc92-4fa8-a195-74b2c1fa3018\") " pod="openshift-marketplace/community-operators-ngvnj" Feb 02 13:13:53 crc kubenswrapper[4721]: I0202 13:13:53.415197 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4bc9245-cc92-4fa8-a195-74b2c1fa3018-utilities\") pod \"community-operators-ngvnj\" (UID: \"e4bc9245-cc92-4fa8-a195-74b2c1fa3018\") " pod="openshift-marketplace/community-operators-ngvnj" Feb 02 13:13:53 crc kubenswrapper[4721]: I0202 13:13:53.516350 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsw8s\" (UniqueName: \"kubernetes.io/projected/e4bc9245-cc92-4fa8-a195-74b2c1fa3018-kube-api-access-fsw8s\") pod \"community-operators-ngvnj\" (UID: \"e4bc9245-cc92-4fa8-a195-74b2c1fa3018\") " pod="openshift-marketplace/community-operators-ngvnj" Feb 02 13:13:53 crc kubenswrapper[4721]: I0202 13:13:53.517555 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4bc9245-cc92-4fa8-a195-74b2c1fa3018-utilities\") pod \"community-operators-ngvnj\" (UID: \"e4bc9245-cc92-4fa8-a195-74b2c1fa3018\") " pod="openshift-marketplace/community-operators-ngvnj" Feb 02 13:13:53 crc kubenswrapper[4721]: I0202 13:13:53.522236 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4bc9245-cc92-4fa8-a195-74b2c1fa3018-catalog-content\") pod \"community-operators-ngvnj\" (UID: \"e4bc9245-cc92-4fa8-a195-74b2c1fa3018\") " pod="openshift-marketplace/community-operators-ngvnj" Feb 02 13:13:53 crc kubenswrapper[4721]: I0202 13:13:53.522870 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4bc9245-cc92-4fa8-a195-74b2c1fa3018-catalog-content\") pod \"community-operators-ngvnj\" (UID: \"e4bc9245-cc92-4fa8-a195-74b2c1fa3018\") " pod="openshift-marketplace/community-operators-ngvnj" Feb 02 13:13:53 crc kubenswrapper[4721]: I0202 13:13:53.520863 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4bc9245-cc92-4fa8-a195-74b2c1fa3018-utilities\") pod \"community-operators-ngvnj\" (UID: \"e4bc9245-cc92-4fa8-a195-74b2c1fa3018\") " pod="openshift-marketplace/community-operators-ngvnj" Feb 02 13:13:53 crc kubenswrapper[4721]: I0202 13:13:53.544409 4721 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fsw8s\" (UniqueName: \"kubernetes.io/projected/e4bc9245-cc92-4fa8-a195-74b2c1fa3018-kube-api-access-fsw8s\") pod \"community-operators-ngvnj\" (UID: \"e4bc9245-cc92-4fa8-a195-74b2c1fa3018\") " pod="openshift-marketplace/community-operators-ngvnj" Feb 02 13:13:53 crc kubenswrapper[4721]: I0202 13:13:53.693998 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ngvnj" Feb 02 13:13:53 crc kubenswrapper[4721]: I0202 13:13:53.938590 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ngvnj"] Feb 02 13:13:54 crc kubenswrapper[4721]: I0202 13:13:54.370116 4721 generic.go:334] "Generic (PLEG): container finished" podID="e4bc9245-cc92-4fa8-a195-74b2c1fa3018" containerID="992bd2a820ea0937516ce359186e2efc41e06c8257ba292dc7b7f5246e9d846d" exitCode=0 Feb 02 13:13:54 crc kubenswrapper[4721]: I0202 13:13:54.370162 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngvnj" event={"ID":"e4bc9245-cc92-4fa8-a195-74b2c1fa3018","Type":"ContainerDied","Data":"992bd2a820ea0937516ce359186e2efc41e06c8257ba292dc7b7f5246e9d846d"} Feb 02 13:13:54 crc kubenswrapper[4721]: I0202 13:13:54.370193 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngvnj" event={"ID":"e4bc9245-cc92-4fa8-a195-74b2c1fa3018","Type":"ContainerStarted","Data":"8c61895cb6dd6e4aec63d8237d91407d9d43fbbfa704efe9db664fadf6487376"} Feb 02 13:13:55 crc kubenswrapper[4721]: I0202 13:13:55.348150 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n"] Feb 02 13:13:55 crc kubenswrapper[4721]: I0202 13:13:55.349353 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n" Feb 02 13:13:55 crc kubenswrapper[4721]: I0202 13:13:55.351305 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 02 13:13:55 crc kubenswrapper[4721]: I0202 13:13:55.378331 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngvnj" event={"ID":"e4bc9245-cc92-4fa8-a195-74b2c1fa3018","Type":"ContainerStarted","Data":"56eb0bf573c34b6796229e31bbdcd14a34f6bd281bbf7f087dc4989701986dbe"} Feb 02 13:13:55 crc kubenswrapper[4721]: I0202 13:13:55.395574 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n"] Feb 02 13:13:55 crc kubenswrapper[4721]: I0202 13:13:55.549630 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4b23cf05-2074-4e51-b6ef-235b207d8b16-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n\" (UID: \"4b23cf05-2074-4e51-b6ef-235b207d8b16\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n" Feb 02 13:13:55 crc kubenswrapper[4721]: I0202 13:13:55.549733 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfcfv\" (UniqueName: \"kubernetes.io/projected/4b23cf05-2074-4e51-b6ef-235b207d8b16-kube-api-access-dfcfv\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n\" (UID: \"4b23cf05-2074-4e51-b6ef-235b207d8b16\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n" Feb 02 13:13:55 crc kubenswrapper[4721]: I0202 13:13:55.549802 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4b23cf05-2074-4e51-b6ef-235b207d8b16-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n\" (UID: \"4b23cf05-2074-4e51-b6ef-235b207d8b16\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n" Feb 02 13:13:55 crc kubenswrapper[4721]: I0202 13:13:55.651249 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4b23cf05-2074-4e51-b6ef-235b207d8b16-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n\" (UID: \"4b23cf05-2074-4e51-b6ef-235b207d8b16\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n" Feb 02 13:13:55 crc kubenswrapper[4721]: I0202 13:13:55.651315 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfcfv\" (UniqueName: \"kubernetes.io/projected/4b23cf05-2074-4e51-b6ef-235b207d8b16-kube-api-access-dfcfv\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n\" (UID: \"4b23cf05-2074-4e51-b6ef-235b207d8b16\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n" Feb 02 13:13:55 crc kubenswrapper[4721]: I0202 13:13:55.651364 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4b23cf05-2074-4e51-b6ef-235b207d8b16-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n\" (UID: 
\"4b23cf05-2074-4e51-b6ef-235b207d8b16\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n" Feb 02 13:13:55 crc kubenswrapper[4721]: I0202 13:13:55.651771 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4b23cf05-2074-4e51-b6ef-235b207d8b16-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n\" (UID: \"4b23cf05-2074-4e51-b6ef-235b207d8b16\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n" Feb 02 13:13:55 crc kubenswrapper[4721]: I0202 13:13:55.651789 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4b23cf05-2074-4e51-b6ef-235b207d8b16-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n\" (UID: \"4b23cf05-2074-4e51-b6ef-235b207d8b16\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n" Feb 02 13:13:55 crc kubenswrapper[4721]: I0202 13:13:55.671004 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfcfv\" (UniqueName: \"kubernetes.io/projected/4b23cf05-2074-4e51-b6ef-235b207d8b16-kube-api-access-dfcfv\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n\" (UID: \"4b23cf05-2074-4e51-b6ef-235b207d8b16\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n" Feb 02 13:13:55 crc kubenswrapper[4721]: I0202 13:13:55.766984 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7"] Feb 02 13:13:55 crc kubenswrapper[4721]: I0202 13:13:55.768891 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7" Feb 02 13:13:55 crc kubenswrapper[4721]: I0202 13:13:55.783630 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7"] Feb 02 13:13:55 crc kubenswrapper[4721]: I0202 13:13:55.955887 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53fd5f54-e7fe-4d86-a5b7-3583e945fff3-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7\" (UID: \"53fd5f54-e7fe-4d86-a5b7-3583e945fff3\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7" Feb 02 13:13:55 crc kubenswrapper[4721]: I0202 13:13:55.955948 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzlks\" (UniqueName: \"kubernetes.io/projected/53fd5f54-e7fe-4d86-a5b7-3583e945fff3-kube-api-access-hzlks\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7\" (UID: \"53fd5f54-e7fe-4d86-a5b7-3583e945fff3\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7" Feb 02 13:13:55 crc kubenswrapper[4721]: I0202 13:13:55.955971 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53fd5f54-e7fe-4d86-a5b7-3583e945fff3-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7\" (UID: \"53fd5f54-e7fe-4d86-a5b7-3583e945fff3\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7" Feb 02 13:13:55 crc kubenswrapper[4721]: I0202 13:13:55.963893 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n" Feb 02 13:13:56 crc kubenswrapper[4721]: I0202 13:13:56.057003 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53fd5f54-e7fe-4d86-a5b7-3583e945fff3-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7\" (UID: \"53fd5f54-e7fe-4d86-a5b7-3583e945fff3\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7" Feb 02 13:13:56 crc kubenswrapper[4721]: I0202 13:13:56.057177 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53fd5f54-e7fe-4d86-a5b7-3583e945fff3-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7\" (UID: \"53fd5f54-e7fe-4d86-a5b7-3583e945fff3\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7" Feb 02 13:13:56 crc kubenswrapper[4721]: I0202 13:13:56.057229 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzlks\" (UniqueName: \"kubernetes.io/projected/53fd5f54-e7fe-4d86-a5b7-3583e945fff3-kube-api-access-hzlks\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7\" (UID: \"53fd5f54-e7fe-4d86-a5b7-3583e945fff3\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7" Feb 02 13:13:56 crc kubenswrapper[4721]: I0202 13:13:56.058829 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53fd5f54-e7fe-4d86-a5b7-3583e945fff3-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7\" (UID: \"53fd5f54-e7fe-4d86-a5b7-3583e945fff3\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7" Feb 02 13:13:56 crc kubenswrapper[4721]: I0202 13:13:56.058861 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53fd5f54-e7fe-4d86-a5b7-3583e945fff3-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7\" (UID: \"53fd5f54-e7fe-4d86-a5b7-3583e945fff3\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7" Feb 02 13:13:56 crc kubenswrapper[4721]: I0202 13:13:56.074903 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzlks\" (UniqueName: \"kubernetes.io/projected/53fd5f54-e7fe-4d86-a5b7-3583e945fff3-kube-api-access-hzlks\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7\" (UID: \"53fd5f54-e7fe-4d86-a5b7-3583e945fff3\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7" Feb 02 13:13:56 crc kubenswrapper[4721]: I0202 13:13:56.094808 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7" Feb 02 13:13:56 crc kubenswrapper[4721]: I0202 13:13:56.163174 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n"] Feb 02 13:13:56 crc kubenswrapper[4721]: I0202 13:13:56.393175 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n" event={"ID":"4b23cf05-2074-4e51-b6ef-235b207d8b16","Type":"ContainerStarted","Data":"70890d5b395f40fb527c5817ac958b85be98e7537656d46714bf54f470595c89"} Feb 02 13:13:56 crc kubenswrapper[4721]: I0202 13:13:56.398806 4721 generic.go:334] "Generic (PLEG): container finished" podID="e4bc9245-cc92-4fa8-a195-74b2c1fa3018" containerID="56eb0bf573c34b6796229e31bbdcd14a34f6bd281bbf7f087dc4989701986dbe" exitCode=0 Feb 02 13:13:56 crc kubenswrapper[4721]: I0202 13:13:56.398851 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngvnj" event={"ID":"e4bc9245-cc92-4fa8-a195-74b2c1fa3018","Type":"ContainerDied","Data":"56eb0bf573c34b6796229e31bbdcd14a34f6bd281bbf7f087dc4989701986dbe"} Feb 02 13:13:56 crc kubenswrapper[4721]: I0202 13:13:56.562832 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7"] Feb 02 13:13:56 crc kubenswrapper[4721]: W0202 13:13:56.570144 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53fd5f54_e7fe_4d86_a5b7_3583e945fff3.slice/crio-74c7d987997096b66b94896c6c743a531ed69cc553ca19de86a558110a827f21 WatchSource:0}: Error finding container 74c7d987997096b66b94896c6c743a531ed69cc553ca19de86a558110a827f21: Status 404 returned error can't find the container with id 74c7d987997096b66b94896c6c743a531ed69cc553ca19de86a558110a827f21 Feb 02 13:13:57 crc kubenswrapper[4721]: I0202 13:13:57.406526 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngvnj" event={"ID":"e4bc9245-cc92-4fa8-a195-74b2c1fa3018","Type":"ContainerStarted","Data":"8d592e0cd2e9861f1bf77ee911de6dc93a57e75ad1d2d6069f5f8a4d50abb822"} Feb 02 13:13:57 crc kubenswrapper[4721]: I0202 13:13:57.408025 4721 generic.go:334] "Generic (PLEG): container finished" podID="53fd5f54-e7fe-4d86-a5b7-3583e945fff3" containerID="312ecdfe1b8de131ebb4e994ad7253a242238c9f1bc7b82397680434a40896a4" exitCode=0 Feb 02 13:13:57 crc kubenswrapper[4721]: I0202 13:13:57.408126 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7" event={"ID":"53fd5f54-e7fe-4d86-a5b7-3583e945fff3","Type":"ContainerDied","Data":"312ecdfe1b8de131ebb4e994ad7253a242238c9f1bc7b82397680434a40896a4"} Feb 02 13:13:57 crc kubenswrapper[4721]: I0202 13:13:57.408204 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7" event={"ID":"53fd5f54-e7fe-4d86-a5b7-3583e945fff3","Type":"ContainerStarted","Data":"74c7d987997096b66b94896c6c743a531ed69cc553ca19de86a558110a827f21"} Feb 02 13:13:57 crc kubenswrapper[4721]: I0202 13:13:57.409525 4721 generic.go:334] "Generic (PLEG): container finished" podID="4b23cf05-2074-4e51-b6ef-235b207d8b16" 
containerID="9e659b420eb15f9adab7ad44219ca5d9df8a5bf8c29f5b8ff96c57b42e14f33d" exitCode=0 Feb 02 13:13:57 crc kubenswrapper[4721]: I0202 13:13:57.409563 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n" event={"ID":"4b23cf05-2074-4e51-b6ef-235b207d8b16","Type":"ContainerDied","Data":"9e659b420eb15f9adab7ad44219ca5d9df8a5bf8c29f5b8ff96c57b42e14f33d"} Feb 02 13:13:57 crc kubenswrapper[4721]: I0202 13:13:57.426575 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ngvnj" podStartSLOduration=2.013986912 podStartE2EDuration="4.426555793s" podCreationTimestamp="2026-02-02 13:13:53 +0000 UTC" firstStartedPulling="2026-02-02 13:13:54.37237461 +0000 UTC m=+774.674888999" lastFinishedPulling="2026-02-02 13:13:56.784943481 +0000 UTC m=+777.087457880" observedRunningTime="2026-02-02 13:13:57.423189825 +0000 UTC m=+777.725704224" watchObservedRunningTime="2026-02-02 13:13:57.426555793 +0000 UTC m=+777.729070182" Feb 02 13:13:59 crc kubenswrapper[4721]: I0202 13:13:59.433942 4721 generic.go:334] "Generic (PLEG): container finished" podID="53fd5f54-e7fe-4d86-a5b7-3583e945fff3" containerID="5a6f2159311eaef61e3383c7faa5c40f41ba2c020d3b11829efc19fcfbe6797f" exitCode=0 Feb 02 13:13:59 crc kubenswrapper[4721]: I0202 13:13:59.434031 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7" event={"ID":"53fd5f54-e7fe-4d86-a5b7-3583e945fff3","Type":"ContainerDied","Data":"5a6f2159311eaef61e3383c7faa5c40f41ba2c020d3b11829efc19fcfbe6797f"} Feb 02 13:13:59 crc kubenswrapper[4721]: I0202 13:13:59.437486 4721 generic.go:334] "Generic (PLEG): container finished" podID="4b23cf05-2074-4e51-b6ef-235b207d8b16" containerID="1c339f583f4558704296e38b771f88fc05eec64bae8e0485efe01b990760ac0a" exitCode=0 Feb 02 13:13:59 crc kubenswrapper[4721]: I0202 13:13:59.437575 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n" event={"ID":"4b23cf05-2074-4e51-b6ef-235b207d8b16","Type":"ContainerDied","Data":"1c339f583f4558704296e38b771f88fc05eec64bae8e0485efe01b990760ac0a"} Feb 02 13:14:00 crc kubenswrapper[4721]: I0202 13:14:00.316542 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jwkvv"] Feb 02 13:14:00 crc kubenswrapper[4721]: I0202 13:14:00.317987 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jwkvv" Feb 02 13:14:00 crc kubenswrapper[4721]: I0202 13:14:00.332945 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jwkvv"] Feb 02 13:14:00 crc kubenswrapper[4721]: I0202 13:14:00.427671 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hshh8\" (UniqueName: \"kubernetes.io/projected/0240e395-0a12-40c4-b5e6-b31168b303ab-kube-api-access-hshh8\") pod \"redhat-operators-jwkvv\" (UID: \"0240e395-0a12-40c4-b5e6-b31168b303ab\") " pod="openshift-marketplace/redhat-operators-jwkvv" Feb 02 13:14:00 crc kubenswrapper[4721]: I0202 13:14:00.428002 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0240e395-0a12-40c4-b5e6-b31168b303ab-utilities\") pod \"redhat-operators-jwkvv\" (UID: \"0240e395-0a12-40c4-b5e6-b31168b303ab\") " pod="openshift-marketplace/redhat-operators-jwkvv" Feb 02 13:14:00 crc kubenswrapper[4721]: I0202 13:14:00.428353 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0240e395-0a12-40c4-b5e6-b31168b303ab-catalog-content\") pod \"redhat-operators-jwkvv\" (UID: \"0240e395-0a12-40c4-b5e6-b31168b303ab\") " pod="openshift-marketplace/redhat-operators-jwkvv" Feb 02 13:14:00 crc kubenswrapper[4721]: I0202 13:14:00.447485 4721 generic.go:334] "Generic (PLEG): container finished" podID="4b23cf05-2074-4e51-b6ef-235b207d8b16" containerID="aa72e2308b19fb71156a3c15943f1bc96bc8016ab1843ff013ca64b8faceff12" exitCode=0 Feb 02 13:14:00 crc kubenswrapper[4721]: I0202 13:14:00.447572 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n" event={"ID":"4b23cf05-2074-4e51-b6ef-235b207d8b16","Type":"ContainerDied","Data":"aa72e2308b19fb71156a3c15943f1bc96bc8016ab1843ff013ca64b8faceff12"} Feb 02 13:14:00 crc kubenswrapper[4721]: I0202 13:14:00.452603 4721 generic.go:334] "Generic (PLEG): container finished" podID="53fd5f54-e7fe-4d86-a5b7-3583e945fff3" containerID="2a275a934a34e5cc5ab2dd6a550b7015bc3b971f0a41965316c7b176c3891331" exitCode=0 Feb 02 13:14:00 crc kubenswrapper[4721]: I0202 13:14:00.452641 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7" event={"ID":"53fd5f54-e7fe-4d86-a5b7-3583e945fff3","Type":"ContainerDied","Data":"2a275a934a34e5cc5ab2dd6a550b7015bc3b971f0a41965316c7b176c3891331"} Feb 02 13:14:00 crc kubenswrapper[4721]: I0202 13:14:00.530146 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0240e395-0a12-40c4-b5e6-b31168b303ab-catalog-content\") pod \"redhat-operators-jwkvv\" (UID: \"0240e395-0a12-40c4-b5e6-b31168b303ab\") " pod="openshift-marketplace/redhat-operators-jwkvv" Feb 02 13:14:00 crc kubenswrapper[4721]: I0202 13:14:00.530525 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hshh8\" (UniqueName: \"kubernetes.io/projected/0240e395-0a12-40c4-b5e6-b31168b303ab-kube-api-access-hshh8\") pod \"redhat-operators-jwkvv\" (UID: \"0240e395-0a12-40c4-b5e6-b31168b303ab\") " pod="openshift-marketplace/redhat-operators-jwkvv" Feb 02 13:14:00 crc kubenswrapper[4721]: I0202 
13:14:00.530670 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0240e395-0a12-40c4-b5e6-b31168b303ab-utilities\") pod \"redhat-operators-jwkvv\" (UID: \"0240e395-0a12-40c4-b5e6-b31168b303ab\") " pod="openshift-marketplace/redhat-operators-jwkvv" Feb 02 13:14:00 crc kubenswrapper[4721]: I0202 13:14:00.530789 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0240e395-0a12-40c4-b5e6-b31168b303ab-catalog-content\") pod \"redhat-operators-jwkvv\" (UID: \"0240e395-0a12-40c4-b5e6-b31168b303ab\") " pod="openshift-marketplace/redhat-operators-jwkvv" Feb 02 13:14:00 crc kubenswrapper[4721]: I0202 13:14:00.531129 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0240e395-0a12-40c4-b5e6-b31168b303ab-utilities\") pod \"redhat-operators-jwkvv\" (UID: \"0240e395-0a12-40c4-b5e6-b31168b303ab\") " pod="openshift-marketplace/redhat-operators-jwkvv" Feb 02 13:14:00 crc kubenswrapper[4721]: I0202 13:14:00.551105 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hshh8\" (UniqueName: \"kubernetes.io/projected/0240e395-0a12-40c4-b5e6-b31168b303ab-kube-api-access-hshh8\") pod \"redhat-operators-jwkvv\" (UID: \"0240e395-0a12-40c4-b5e6-b31168b303ab\") " pod="openshift-marketplace/redhat-operators-jwkvv" Feb 02 13:14:00 crc kubenswrapper[4721]: I0202 13:14:00.634085 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jwkvv" Feb 02 13:14:00 crc kubenswrapper[4721]: I0202 13:14:00.883729 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jwkvv"] Feb 02 13:14:00 crc kubenswrapper[4721]: W0202 13:14:00.889002 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0240e395_0a12_40c4_b5e6_b31168b303ab.slice/crio-b9b8aa71e9272c3a83f81ba3e2d7a8de409acbf418a29cb8e7063ac3659e42e9 WatchSource:0}: Error finding container b9b8aa71e9272c3a83f81ba3e2d7a8de409acbf418a29cb8e7063ac3659e42e9: Status 404 returned error can't find the container with id b9b8aa71e9272c3a83f81ba3e2d7a8de409acbf418a29cb8e7063ac3659e42e9 Feb 02 13:14:01 crc kubenswrapper[4721]: I0202 13:14:01.459549 4721 generic.go:334] "Generic (PLEG): container finished" podID="0240e395-0a12-40c4-b5e6-b31168b303ab" containerID="3fdfe29f698f72ad98e08f0fea56f5e1ea04671f512653d91c811ab010ecba19" exitCode=0 Feb 02 13:14:01 crc kubenswrapper[4721]: I0202 13:14:01.459609 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwkvv" event={"ID":"0240e395-0a12-40c4-b5e6-b31168b303ab","Type":"ContainerDied","Data":"3fdfe29f698f72ad98e08f0fea56f5e1ea04671f512653d91c811ab010ecba19"} Feb 02 13:14:01 crc kubenswrapper[4721]: I0202 13:14:01.459649 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwkvv" event={"ID":"0240e395-0a12-40c4-b5e6-b31168b303ab","Type":"ContainerStarted","Data":"b9b8aa71e9272c3a83f81ba3e2d7a8de409acbf418a29cb8e7063ac3659e42e9"} Feb 02 13:14:01 crc kubenswrapper[4721]: I0202 13:14:01.813246 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n" Feb 02 13:14:01 crc kubenswrapper[4721]: I0202 13:14:01.815308 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7" Feb 02 13:14:01 crc kubenswrapper[4721]: I0202 13:14:01.846803 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfcfv\" (UniqueName: \"kubernetes.io/projected/4b23cf05-2074-4e51-b6ef-235b207d8b16-kube-api-access-dfcfv\") pod \"4b23cf05-2074-4e51-b6ef-235b207d8b16\" (UID: \"4b23cf05-2074-4e51-b6ef-235b207d8b16\") " Feb 02 13:14:01 crc kubenswrapper[4721]: I0202 13:14:01.846840 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53fd5f54-e7fe-4d86-a5b7-3583e945fff3-util\") pod \"53fd5f54-e7fe-4d86-a5b7-3583e945fff3\" (UID: \"53fd5f54-e7fe-4d86-a5b7-3583e945fff3\") " Feb 02 13:14:01 crc kubenswrapper[4721]: I0202 13:14:01.846940 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4b23cf05-2074-4e51-b6ef-235b207d8b16-bundle\") pod \"4b23cf05-2074-4e51-b6ef-235b207d8b16\" (UID: \"4b23cf05-2074-4e51-b6ef-235b207d8b16\") " Feb 02 13:14:01 crc kubenswrapper[4721]: I0202 13:14:01.846959 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzlks\" (UniqueName: \"kubernetes.io/projected/53fd5f54-e7fe-4d86-a5b7-3583e945fff3-kube-api-access-hzlks\") pod \"53fd5f54-e7fe-4d86-a5b7-3583e945fff3\" (UID: \"53fd5f54-e7fe-4d86-a5b7-3583e945fff3\") " Feb 02 13:14:01 crc kubenswrapper[4721]: I0202 13:14:01.847023 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4b23cf05-2074-4e51-b6ef-235b207d8b16-util\") pod \"4b23cf05-2074-4e51-b6ef-235b207d8b16\" (UID: \"4b23cf05-2074-4e51-b6ef-235b207d8b16\") " Feb 02 13:14:01 crc kubenswrapper[4721]: I0202 13:14:01.847040 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53fd5f54-e7fe-4d86-a5b7-3583e945fff3-bundle\") pod \"53fd5f54-e7fe-4d86-a5b7-3583e945fff3\" (UID: \"53fd5f54-e7fe-4d86-a5b7-3583e945fff3\") " Feb 02 13:14:01 crc kubenswrapper[4721]: I0202 13:14:01.848033 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b23cf05-2074-4e51-b6ef-235b207d8b16-bundle" (OuterVolumeSpecName: "bundle") pod "4b23cf05-2074-4e51-b6ef-235b207d8b16" (UID: "4b23cf05-2074-4e51-b6ef-235b207d8b16"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:14:01 crc kubenswrapper[4721]: I0202 13:14:01.848383 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53fd5f54-e7fe-4d86-a5b7-3583e945fff3-bundle" (OuterVolumeSpecName: "bundle") pod "53fd5f54-e7fe-4d86-a5b7-3583e945fff3" (UID: "53fd5f54-e7fe-4d86-a5b7-3583e945fff3"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:14:01 crc kubenswrapper[4721]: I0202 13:14:01.854398 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b23cf05-2074-4e51-b6ef-235b207d8b16-kube-api-access-dfcfv" (OuterVolumeSpecName: "kube-api-access-dfcfv") pod "4b23cf05-2074-4e51-b6ef-235b207d8b16" (UID: "4b23cf05-2074-4e51-b6ef-235b207d8b16"). InnerVolumeSpecName "kube-api-access-dfcfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:14:01 crc kubenswrapper[4721]: I0202 13:14:01.854424 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53fd5f54-e7fe-4d86-a5b7-3583e945fff3-kube-api-access-hzlks" (OuterVolumeSpecName: "kube-api-access-hzlks") pod "53fd5f54-e7fe-4d86-a5b7-3583e945fff3" (UID: "53fd5f54-e7fe-4d86-a5b7-3583e945fff3"). InnerVolumeSpecName "kube-api-access-hzlks". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:14:01 crc kubenswrapper[4721]: I0202 13:14:01.866994 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53fd5f54-e7fe-4d86-a5b7-3583e945fff3-util" (OuterVolumeSpecName: "util") pod "53fd5f54-e7fe-4d86-a5b7-3583e945fff3" (UID: "53fd5f54-e7fe-4d86-a5b7-3583e945fff3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:14:01 crc kubenswrapper[4721]: I0202 13:14:01.887884 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b23cf05-2074-4e51-b6ef-235b207d8b16-util" (OuterVolumeSpecName: "util") pod "4b23cf05-2074-4e51-b6ef-235b207d8b16" (UID: "4b23cf05-2074-4e51-b6ef-235b207d8b16"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:14:01 crc kubenswrapper[4721]: I0202 13:14:01.948746 4721 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4b23cf05-2074-4e51-b6ef-235b207d8b16-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:14:01 crc kubenswrapper[4721]: I0202 13:14:01.948785 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzlks\" (UniqueName: \"kubernetes.io/projected/53fd5f54-e7fe-4d86-a5b7-3583e945fff3-kube-api-access-hzlks\") on node \"crc\" DevicePath \"\"" Feb 02 13:14:01 crc kubenswrapper[4721]: I0202 13:14:01.948800 4721 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4b23cf05-2074-4e51-b6ef-235b207d8b16-util\") on node \"crc\" DevicePath \"\"" Feb 02 13:14:01 crc kubenswrapper[4721]: I0202 13:14:01.948812 4721 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53fd5f54-e7fe-4d86-a5b7-3583e945fff3-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:14:01 crc kubenswrapper[4721]: I0202 13:14:01.948826 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfcfv\" (UniqueName: \"kubernetes.io/projected/4b23cf05-2074-4e51-b6ef-235b207d8b16-kube-api-access-dfcfv\") on node \"crc\" DevicePath \"\"" Feb 02 13:14:01 crc kubenswrapper[4721]: I0202 13:14:01.948839 4721 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53fd5f54-e7fe-4d86-a5b7-3583e945fff3-util\") on node \"crc\" DevicePath \"\"" Feb 02 13:14:02 crc kubenswrapper[4721]: I0202 13:14:02.467660 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7" event={"ID":"53fd5f54-e7fe-4d86-a5b7-3583e945fff3","Type":"ContainerDied","Data":"74c7d987997096b66b94896c6c743a531ed69cc553ca19de86a558110a827f21"} Feb 02 13:14:02 crc kubenswrapper[4721]: I0202 13:14:02.467699 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74c7d987997096b66b94896c6c743a531ed69cc553ca19de86a558110a827f21" Feb 02 13:14:02 crc kubenswrapper[4721]: I0202 13:14:02.467678 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7" Feb 02 13:14:02 crc kubenswrapper[4721]: I0202 13:14:02.471348 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n" event={"ID":"4b23cf05-2074-4e51-b6ef-235b207d8b16","Type":"ContainerDied","Data":"70890d5b395f40fb527c5817ac958b85be98e7537656d46714bf54f470595c89"} Feb 02 13:14:02 crc kubenswrapper[4721]: I0202 13:14:02.471382 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70890d5b395f40fb527c5817ac958b85be98e7537656d46714bf54f470595c89" Feb 02 13:14:02 crc kubenswrapper[4721]: I0202 13:14:02.471387 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n" Feb 02 13:14:02 crc kubenswrapper[4721]: I0202 13:14:02.475220 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwkvv" event={"ID":"0240e395-0a12-40c4-b5e6-b31168b303ab","Type":"ContainerStarted","Data":"67cb9052470ca9d2b58d9c98a6bbe0181b292c87088d890cf20bcc66f8f8c5b1"} Feb 02 13:14:03 crc kubenswrapper[4721]: I0202 13:14:03.483219 4721 generic.go:334] "Generic (PLEG): container finished" podID="0240e395-0a12-40c4-b5e6-b31168b303ab" containerID="67cb9052470ca9d2b58d9c98a6bbe0181b292c87088d890cf20bcc66f8f8c5b1" exitCode=0 Feb 02 13:14:03 crc kubenswrapper[4721]: I0202 13:14:03.483463 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwkvv" event={"ID":"0240e395-0a12-40c4-b5e6-b31168b303ab","Type":"ContainerDied","Data":"67cb9052470ca9d2b58d9c98a6bbe0181b292c87088d890cf20bcc66f8f8c5b1"} Feb 02 13:14:03 crc kubenswrapper[4721]: I0202 13:14:03.694420 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ngvnj" Feb 02 13:14:03 crc kubenswrapper[4721]: I0202 13:14:03.694464 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ngvnj" Feb 02 13:14:03 crc kubenswrapper[4721]: I0202 13:14:03.736512 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ngvnj" Feb 02 13:14:04 crc kubenswrapper[4721]: I0202 13:14:04.492748 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwkvv" event={"ID":"0240e395-0a12-40c4-b5e6-b31168b303ab","Type":"ContainerStarted","Data":"d553ccad788dc78c0a62e5fdc6b75b154047af412e476df76d3a46cdee3d9f1e"} Feb 02 13:14:04 crc kubenswrapper[4721]: I0202 13:14:04.515446 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jwkvv" podStartSLOduration=2.023563493 podStartE2EDuration="4.515425717s" 
podCreationTimestamp="2026-02-02 13:14:00 +0000 UTC" firstStartedPulling="2026-02-02 13:14:01.461340756 +0000 UTC m=+781.763855145" lastFinishedPulling="2026-02-02 13:14:03.95320296 +0000 UTC m=+784.255717369" observedRunningTime="2026-02-02 13:14:04.511476144 +0000 UTC m=+784.813990533" watchObservedRunningTime="2026-02-02 13:14:04.515425717 +0000 UTC m=+784.817940106" Feb 02 13:14:04 crc kubenswrapper[4721]: I0202 13:14:04.532425 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ngvnj" Feb 02 13:14:07 crc kubenswrapper[4721]: I0202 13:14:07.711779 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ngvnj"] Feb 02 13:14:07 crc kubenswrapper[4721]: I0202 13:14:07.711996 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ngvnj" podUID="e4bc9245-cc92-4fa8-a195-74b2c1fa3018" containerName="registry-server" containerID="cri-o://8d592e0cd2e9861f1bf77ee911de6dc93a57e75ad1d2d6069f5f8a4d50abb822" gracePeriod=2 Feb 02 13:14:08 crc kubenswrapper[4721]: E0202 13:14:08.796804 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4bc9245_cc92_4fa8_a195_74b2c1fa3018.slice/crio-conmon-8d592e0cd2e9861f1bf77ee911de6dc93a57e75ad1d2d6069f5f8a4d50abb822.scope\": RecentStats: unable to find data in memory cache]" Feb 02 13:14:09 crc kubenswrapper[4721]: I0202 13:14:09.286639 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ngvnj" Feb 02 13:14:09 crc kubenswrapper[4721]: I0202 13:14:09.365601 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4bc9245-cc92-4fa8-a195-74b2c1fa3018-utilities\") pod \"e4bc9245-cc92-4fa8-a195-74b2c1fa3018\" (UID: \"e4bc9245-cc92-4fa8-a195-74b2c1fa3018\") " Feb 02 13:14:09 crc kubenswrapper[4721]: I0202 13:14:09.365719 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4bc9245-cc92-4fa8-a195-74b2c1fa3018-catalog-content\") pod \"e4bc9245-cc92-4fa8-a195-74b2c1fa3018\" (UID: \"e4bc9245-cc92-4fa8-a195-74b2c1fa3018\") " Feb 02 13:14:09 crc kubenswrapper[4721]: I0202 13:14:09.365813 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsw8s\" (UniqueName: \"kubernetes.io/projected/e4bc9245-cc92-4fa8-a195-74b2c1fa3018-kube-api-access-fsw8s\") pod \"e4bc9245-cc92-4fa8-a195-74b2c1fa3018\" (UID: \"e4bc9245-cc92-4fa8-a195-74b2c1fa3018\") " Feb 02 13:14:09 crc kubenswrapper[4721]: I0202 13:14:09.366625 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4bc9245-cc92-4fa8-a195-74b2c1fa3018-utilities" (OuterVolumeSpecName: "utilities") pod "e4bc9245-cc92-4fa8-a195-74b2c1fa3018" (UID: "e4bc9245-cc92-4fa8-a195-74b2c1fa3018"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:14:09 crc kubenswrapper[4721]: I0202 13:14:09.371263 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4bc9245-cc92-4fa8-a195-74b2c1fa3018-kube-api-access-fsw8s" (OuterVolumeSpecName: "kube-api-access-fsw8s") pod "e4bc9245-cc92-4fa8-a195-74b2c1fa3018" (UID: "e4bc9245-cc92-4fa8-a195-74b2c1fa3018"). InnerVolumeSpecName "kube-api-access-fsw8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:14:09 crc kubenswrapper[4721]: I0202 13:14:09.421482 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4bc9245-cc92-4fa8-a195-74b2c1fa3018-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4bc9245-cc92-4fa8-a195-74b2c1fa3018" (UID: "e4bc9245-cc92-4fa8-a195-74b2c1fa3018"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:14:09 crc kubenswrapper[4721]: I0202 13:14:09.467142 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsw8s\" (UniqueName: \"kubernetes.io/projected/e4bc9245-cc92-4fa8-a195-74b2c1fa3018-kube-api-access-fsw8s\") on node \"crc\" DevicePath \"\"" Feb 02 13:14:09 crc kubenswrapper[4721]: I0202 13:14:09.467185 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4bc9245-cc92-4fa8-a195-74b2c1fa3018-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:14:09 crc kubenswrapper[4721]: I0202 13:14:09.467196 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4bc9245-cc92-4fa8-a195-74b2c1fa3018-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:14:09 crc kubenswrapper[4721]: I0202 13:14:09.521187 4721 generic.go:334] "Generic (PLEG): container finished" podID="e4bc9245-cc92-4fa8-a195-74b2c1fa3018" containerID="8d592e0cd2e9861f1bf77ee911de6dc93a57e75ad1d2d6069f5f8a4d50abb822" exitCode=0 Feb 02 13:14:09 crc kubenswrapper[4721]: I0202 13:14:09.521226 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngvnj" event={"ID":"e4bc9245-cc92-4fa8-a195-74b2c1fa3018","Type":"ContainerDied","Data":"8d592e0cd2e9861f1bf77ee911de6dc93a57e75ad1d2d6069f5f8a4d50abb822"} Feb 02 13:14:09 crc kubenswrapper[4721]: I0202 13:14:09.521260 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ngvnj" Feb 02 13:14:09 crc kubenswrapper[4721]: I0202 13:14:09.521273 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngvnj" event={"ID":"e4bc9245-cc92-4fa8-a195-74b2c1fa3018","Type":"ContainerDied","Data":"8c61895cb6dd6e4aec63d8237d91407d9d43fbbfa704efe9db664fadf6487376"} Feb 02 13:14:09 crc kubenswrapper[4721]: I0202 13:14:09.521290 4721 scope.go:117] "RemoveContainer" containerID="8d592e0cd2e9861f1bf77ee911de6dc93a57e75ad1d2d6069f5f8a4d50abb822" Feb 02 13:14:09 crc kubenswrapper[4721]: I0202 13:14:09.542296 4721 scope.go:117] "RemoveContainer" containerID="56eb0bf573c34b6796229e31bbdcd14a34f6bd281bbf7f087dc4989701986dbe" Feb 02 13:14:09 crc kubenswrapper[4721]: I0202 13:14:09.557442 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ngvnj"] Feb 02 13:14:09 crc kubenswrapper[4721]: I0202 13:14:09.563355 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ngvnj"] Feb 02 13:14:09 crc kubenswrapper[4721]: I0202 13:14:09.578689 4721 scope.go:117] "RemoveContainer" containerID="992bd2a820ea0937516ce359186e2efc41e06c8257ba292dc7b7f5246e9d846d" Feb 02 13:14:09 crc kubenswrapper[4721]: I0202 13:14:09.595795 4721 scope.go:117] "RemoveContainer" containerID="8d592e0cd2e9861f1bf77ee911de6dc93a57e75ad1d2d6069f5f8a4d50abb822" Feb 02 13:14:09 crc kubenswrapper[4721]: E0202 13:14:09.596226 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d592e0cd2e9861f1bf77ee911de6dc93a57e75ad1d2d6069f5f8a4d50abb822\": container with ID starting with 8d592e0cd2e9861f1bf77ee911de6dc93a57e75ad1d2d6069f5f8a4d50abb822 not found: ID does not exist" containerID="8d592e0cd2e9861f1bf77ee911de6dc93a57e75ad1d2d6069f5f8a4d50abb822" Feb 02 13:14:09 crc kubenswrapper[4721]: I0202 13:14:09.596274 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d592e0cd2e9861f1bf77ee911de6dc93a57e75ad1d2d6069f5f8a4d50abb822"} err="failed to get container status \"8d592e0cd2e9861f1bf77ee911de6dc93a57e75ad1d2d6069f5f8a4d50abb822\": rpc error: code = NotFound desc = could not find container \"8d592e0cd2e9861f1bf77ee911de6dc93a57e75ad1d2d6069f5f8a4d50abb822\": container with ID starting with 8d592e0cd2e9861f1bf77ee911de6dc93a57e75ad1d2d6069f5f8a4d50abb822 not found: ID does not exist" Feb 02 13:14:09 crc kubenswrapper[4721]: I0202 13:14:09.596303 4721 scope.go:117] "RemoveContainer" containerID="56eb0bf573c34b6796229e31bbdcd14a34f6bd281bbf7f087dc4989701986dbe" Feb 02 13:14:09 crc kubenswrapper[4721]: E0202 13:14:09.596864 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56eb0bf573c34b6796229e31bbdcd14a34f6bd281bbf7f087dc4989701986dbe\": container with ID starting with 56eb0bf573c34b6796229e31bbdcd14a34f6bd281bbf7f087dc4989701986dbe not found: ID does not exist" containerID="56eb0bf573c34b6796229e31bbdcd14a34f6bd281bbf7f087dc4989701986dbe" Feb 02 13:14:09 crc kubenswrapper[4721]: I0202 13:14:09.596980 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56eb0bf573c34b6796229e31bbdcd14a34f6bd281bbf7f087dc4989701986dbe"} err="failed to get container status \"56eb0bf573c34b6796229e31bbdcd14a34f6bd281bbf7f087dc4989701986dbe\": rpc error: code = NotFound desc = could not find 
container \"56eb0bf573c34b6796229e31bbdcd14a34f6bd281bbf7f087dc4989701986dbe\": container with ID starting with 56eb0bf573c34b6796229e31bbdcd14a34f6bd281bbf7f087dc4989701986dbe not found: ID does not exist" Feb 02 13:14:09 crc kubenswrapper[4721]: I0202 13:14:09.597095 4721 scope.go:117] "RemoveContainer" containerID="992bd2a820ea0937516ce359186e2efc41e06c8257ba292dc7b7f5246e9d846d" Feb 02 13:14:09 crc kubenswrapper[4721]: E0202 13:14:09.598310 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"992bd2a820ea0937516ce359186e2efc41e06c8257ba292dc7b7f5246e9d846d\": container with ID starting with 992bd2a820ea0937516ce359186e2efc41e06c8257ba292dc7b7f5246e9d846d not found: ID does not exist" containerID="992bd2a820ea0937516ce359186e2efc41e06c8257ba292dc7b7f5246e9d846d" Feb 02 13:14:09 crc kubenswrapper[4721]: I0202 13:14:09.598347 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"992bd2a820ea0937516ce359186e2efc41e06c8257ba292dc7b7f5246e9d846d"} err="failed to get container status \"992bd2a820ea0937516ce359186e2efc41e06c8257ba292dc7b7f5246e9d846d\": rpc error: code = NotFound desc = could not find container \"992bd2a820ea0937516ce359186e2efc41e06c8257ba292dc7b7f5246e9d846d\": container with ID starting with 992bd2a820ea0937516ce359186e2efc41e06c8257ba292dc7b7f5246e9d846d not found: ID does not exist" Feb 02 13:14:10 crc kubenswrapper[4721]: I0202 13:14:10.417879 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4bc9245-cc92-4fa8-a195-74b2c1fa3018" path="/var/lib/kubelet/pods/e4bc9245-cc92-4fa8-a195-74b2c1fa3018/volumes" Feb 02 13:14:10 crc kubenswrapper[4721]: I0202 13:14:10.634346 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jwkvv" Feb 02 13:14:10 crc kubenswrapper[4721]: I0202 13:14:10.634717 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jwkvv" Feb 02 13:14:10 crc kubenswrapper[4721]: I0202 13:14:10.677996 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jwkvv" Feb 02 13:14:11 crc kubenswrapper[4721]: I0202 13:14:11.582195 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jwkvv" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.137991 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-756566789b-zpsf5"] Feb 02 13:14:12 crc kubenswrapper[4721]: E0202 13:14:12.138298 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b23cf05-2074-4e51-b6ef-235b207d8b16" containerName="pull" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.138320 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b23cf05-2074-4e51-b6ef-235b207d8b16" containerName="pull" Feb 02 13:14:12 crc kubenswrapper[4721]: E0202 13:14:12.138338 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4bc9245-cc92-4fa8-a195-74b2c1fa3018" containerName="extract-content" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.138347 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4bc9245-cc92-4fa8-a195-74b2c1fa3018" containerName="extract-content" Feb 02 13:14:12 crc kubenswrapper[4721]: E0202 13:14:12.138356 4721 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4b23cf05-2074-4e51-b6ef-235b207d8b16" containerName="util" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.138365 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b23cf05-2074-4e51-b6ef-235b207d8b16" containerName="util" Feb 02 13:14:12 crc kubenswrapper[4721]: E0202 13:14:12.138381 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4bc9245-cc92-4fa8-a195-74b2c1fa3018" containerName="registry-server" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.138388 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4bc9245-cc92-4fa8-a195-74b2c1fa3018" containerName="registry-server" Feb 02 13:14:12 crc kubenswrapper[4721]: E0202 13:14:12.138402 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4bc9245-cc92-4fa8-a195-74b2c1fa3018" containerName="extract-utilities" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.138410 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4bc9245-cc92-4fa8-a195-74b2c1fa3018" containerName="extract-utilities" Feb 02 13:14:12 crc kubenswrapper[4721]: E0202 13:14:12.138420 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53fd5f54-e7fe-4d86-a5b7-3583e945fff3" containerName="pull" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.138427 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="53fd5f54-e7fe-4d86-a5b7-3583e945fff3" containerName="pull" Feb 02 13:14:12 crc kubenswrapper[4721]: E0202 13:14:12.138437 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b23cf05-2074-4e51-b6ef-235b207d8b16" containerName="extract" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.138444 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b23cf05-2074-4e51-b6ef-235b207d8b16" containerName="extract" Feb 02 13:14:12 crc kubenswrapper[4721]: E0202 13:14:12.138459 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53fd5f54-e7fe-4d86-a5b7-3583e945fff3" containerName="extract" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.138467 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="53fd5f54-e7fe-4d86-a5b7-3583e945fff3" containerName="extract" Feb 02 13:14:12 crc kubenswrapper[4721]: E0202 13:14:12.138482 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53fd5f54-e7fe-4d86-a5b7-3583e945fff3" containerName="util" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.138489 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="53fd5f54-e7fe-4d86-a5b7-3583e945fff3" containerName="util" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.138631 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4bc9245-cc92-4fa8-a195-74b2c1fa3018" containerName="registry-server" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.138645 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="53fd5f54-e7fe-4d86-a5b7-3583e945fff3" containerName="extract" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.138657 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b23cf05-2074-4e51-b6ef-235b207d8b16" containerName="extract" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.139334 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-756566789b-zpsf5" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.141403 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-nqmwn" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.141670 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.142115 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.142181 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.142213 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.148162 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.156261 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-756566789b-zpsf5"] Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.202588 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4mq8\" (UniqueName: \"kubernetes.io/projected/af1123f2-fce6-410b-a82b-9b292bb8bf68-kube-api-access-c4mq8\") pod \"loki-operator-controller-manager-756566789b-zpsf5\" (UID: \"af1123f2-fce6-410b-a82b-9b292bb8bf68\") " pod="openshift-operators-redhat/loki-operator-controller-manager-756566789b-zpsf5" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.202669 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af1123f2-fce6-410b-a82b-9b292bb8bf68-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-756566789b-zpsf5\" (UID: \"af1123f2-fce6-410b-a82b-9b292bb8bf68\") " pod="openshift-operators-redhat/loki-operator-controller-manager-756566789b-zpsf5" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.202700 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/af1123f2-fce6-410b-a82b-9b292bb8bf68-manager-config\") pod \"loki-operator-controller-manager-756566789b-zpsf5\" (UID: \"af1123f2-fce6-410b-a82b-9b292bb8bf68\") " pod="openshift-operators-redhat/loki-operator-controller-manager-756566789b-zpsf5" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.202924 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/af1123f2-fce6-410b-a82b-9b292bb8bf68-apiservice-cert\") pod \"loki-operator-controller-manager-756566789b-zpsf5\" (UID: \"af1123f2-fce6-410b-a82b-9b292bb8bf68\") " pod="openshift-operators-redhat/loki-operator-controller-manager-756566789b-zpsf5" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.203052 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" 
(UniqueName: \"kubernetes.io/secret/af1123f2-fce6-410b-a82b-9b292bb8bf68-webhook-cert\") pod \"loki-operator-controller-manager-756566789b-zpsf5\" (UID: \"af1123f2-fce6-410b-a82b-9b292bb8bf68\") " pod="openshift-operators-redhat/loki-operator-controller-manager-756566789b-zpsf5" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.304350 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/af1123f2-fce6-410b-a82b-9b292bb8bf68-apiservice-cert\") pod \"loki-operator-controller-manager-756566789b-zpsf5\" (UID: \"af1123f2-fce6-410b-a82b-9b292bb8bf68\") " pod="openshift-operators-redhat/loki-operator-controller-manager-756566789b-zpsf5" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.304410 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/af1123f2-fce6-410b-a82b-9b292bb8bf68-webhook-cert\") pod \"loki-operator-controller-manager-756566789b-zpsf5\" (UID: \"af1123f2-fce6-410b-a82b-9b292bb8bf68\") " pod="openshift-operators-redhat/loki-operator-controller-manager-756566789b-zpsf5" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.304435 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4mq8\" (UniqueName: \"kubernetes.io/projected/af1123f2-fce6-410b-a82b-9b292bb8bf68-kube-api-access-c4mq8\") pod \"loki-operator-controller-manager-756566789b-zpsf5\" (UID: \"af1123f2-fce6-410b-a82b-9b292bb8bf68\") " pod="openshift-operators-redhat/loki-operator-controller-manager-756566789b-zpsf5" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.304472 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af1123f2-fce6-410b-a82b-9b292bb8bf68-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-756566789b-zpsf5\" (UID: \"af1123f2-fce6-410b-a82b-9b292bb8bf68\") " pod="openshift-operators-redhat/loki-operator-controller-manager-756566789b-zpsf5" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.304497 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/af1123f2-fce6-410b-a82b-9b292bb8bf68-manager-config\") pod \"loki-operator-controller-manager-756566789b-zpsf5\" (UID: \"af1123f2-fce6-410b-a82b-9b292bb8bf68\") " pod="openshift-operators-redhat/loki-operator-controller-manager-756566789b-zpsf5" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.305540 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/af1123f2-fce6-410b-a82b-9b292bb8bf68-manager-config\") pod \"loki-operator-controller-manager-756566789b-zpsf5\" (UID: \"af1123f2-fce6-410b-a82b-9b292bb8bf68\") " pod="openshift-operators-redhat/loki-operator-controller-manager-756566789b-zpsf5" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.317014 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/af1123f2-fce6-410b-a82b-9b292bb8bf68-apiservice-cert\") pod \"loki-operator-controller-manager-756566789b-zpsf5\" (UID: \"af1123f2-fce6-410b-a82b-9b292bb8bf68\") " pod="openshift-operators-redhat/loki-operator-controller-manager-756566789b-zpsf5" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.329111 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/af1123f2-fce6-410b-a82b-9b292bb8bf68-webhook-cert\") pod \"loki-operator-controller-manager-756566789b-zpsf5\" (UID: \"af1123f2-fce6-410b-a82b-9b292bb8bf68\") " pod="openshift-operators-redhat/loki-operator-controller-manager-756566789b-zpsf5" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.331828 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af1123f2-fce6-410b-a82b-9b292bb8bf68-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-756566789b-zpsf5\" (UID: \"af1123f2-fce6-410b-a82b-9b292bb8bf68\") " pod="openshift-operators-redhat/loki-operator-controller-manager-756566789b-zpsf5" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.337035 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4mq8\" (UniqueName: \"kubernetes.io/projected/af1123f2-fce6-410b-a82b-9b292bb8bf68-kube-api-access-c4mq8\") pod \"loki-operator-controller-manager-756566789b-zpsf5\" (UID: \"af1123f2-fce6-410b-a82b-9b292bb8bf68\") " pod="openshift-operators-redhat/loki-operator-controller-manager-756566789b-zpsf5" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.459976 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-756566789b-zpsf5" Feb 02 13:14:12 crc kubenswrapper[4721]: I0202 13:14:12.768672 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-756566789b-zpsf5"] Feb 02 13:14:12 crc kubenswrapper[4721]: W0202 13:14:12.780229 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf1123f2_fce6_410b_a82b_9b292bb8bf68.slice/crio-59d16b5f698762c1653aab76a89de977017af837e0a4202956105b348b163a34 WatchSource:0}: Error finding container 59d16b5f698762c1653aab76a89de977017af837e0a4202956105b348b163a34: Status 404 returned error can't find the container with id 59d16b5f698762c1653aab76a89de977017af837e0a4202956105b348b163a34 Feb 02 13:14:13 crc kubenswrapper[4721]: I0202 13:14:13.550596 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-756566789b-zpsf5" event={"ID":"af1123f2-fce6-410b-a82b-9b292bb8bf68","Type":"ContainerStarted","Data":"59d16b5f698762c1653aab76a89de977017af837e0a4202956105b348b163a34"} Feb 02 13:14:14 crc kubenswrapper[4721]: I0202 13:14:14.764269 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:14:14 crc kubenswrapper[4721]: I0202 13:14:14.764336 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:14:14 crc kubenswrapper[4721]: I0202 13:14:14.926267 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-9sfnv"] Feb 02 13:14:14 crc kubenswrapper[4721]: I0202 13:14:14.927184 4721 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-9sfnv" Feb 02 13:14:14 crc kubenswrapper[4721]: I0202 13:14:14.929613 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Feb 02 13:14:14 crc kubenswrapper[4721]: I0202 13:14:14.929674 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Feb 02 13:14:14 crc kubenswrapper[4721]: I0202 13:14:14.930087 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-rs8dq" Feb 02 13:14:14 crc kubenswrapper[4721]: I0202 13:14:14.944738 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-9sfnv"] Feb 02 13:14:14 crc kubenswrapper[4721]: I0202 13:14:14.947580 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5xxl\" (UniqueName: \"kubernetes.io/projected/f9c5d281-206d-4729-a031-feb5b9234c8f-kube-api-access-v5xxl\") pod \"cluster-logging-operator-79cf69ddc8-9sfnv\" (UID: \"f9c5d281-206d-4729-a031-feb5b9234c8f\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-9sfnv" Feb 02 13:14:15 crc kubenswrapper[4721]: I0202 13:14:15.049480 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5xxl\" (UniqueName: \"kubernetes.io/projected/f9c5d281-206d-4729-a031-feb5b9234c8f-kube-api-access-v5xxl\") pod \"cluster-logging-operator-79cf69ddc8-9sfnv\" (UID: \"f9c5d281-206d-4729-a031-feb5b9234c8f\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-9sfnv" Feb 02 13:14:15 crc kubenswrapper[4721]: I0202 13:14:15.073991 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5xxl\" (UniqueName: \"kubernetes.io/projected/f9c5d281-206d-4729-a031-feb5b9234c8f-kube-api-access-v5xxl\") pod \"cluster-logging-operator-79cf69ddc8-9sfnv\" (UID: \"f9c5d281-206d-4729-a031-feb5b9234c8f\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-9sfnv" Feb 02 13:14:15 crc kubenswrapper[4721]: I0202 13:14:15.107690 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jwkvv"] Feb 02 13:14:15 crc kubenswrapper[4721]: I0202 13:14:15.108807 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jwkvv" podUID="0240e395-0a12-40c4-b5e6-b31168b303ab" containerName="registry-server" containerID="cri-o://d553ccad788dc78c0a62e5fdc6b75b154047af412e476df76d3a46cdee3d9f1e" gracePeriod=2 Feb 02 13:14:15 crc kubenswrapper[4721]: I0202 13:14:15.279419 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-9sfnv" Feb 02 13:14:15 crc kubenswrapper[4721]: I0202 13:14:15.604680 4721 generic.go:334] "Generic (PLEG): container finished" podID="0240e395-0a12-40c4-b5e6-b31168b303ab" containerID="d553ccad788dc78c0a62e5fdc6b75b154047af412e476df76d3a46cdee3d9f1e" exitCode=0 Feb 02 13:14:15 crc kubenswrapper[4721]: I0202 13:14:15.605159 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwkvv" event={"ID":"0240e395-0a12-40c4-b5e6-b31168b303ab","Type":"ContainerDied","Data":"d553ccad788dc78c0a62e5fdc6b75b154047af412e476df76d3a46cdee3d9f1e"} Feb 02 13:14:15 crc kubenswrapper[4721]: I0202 13:14:15.643734 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jwkvv" Feb 02 13:14:15 crc kubenswrapper[4721]: I0202 13:14:15.660104 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0240e395-0a12-40c4-b5e6-b31168b303ab-utilities\") pod \"0240e395-0a12-40c4-b5e6-b31168b303ab\" (UID: \"0240e395-0a12-40c4-b5e6-b31168b303ab\") " Feb 02 13:14:15 crc kubenswrapper[4721]: I0202 13:14:15.660200 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0240e395-0a12-40c4-b5e6-b31168b303ab-catalog-content\") pod \"0240e395-0a12-40c4-b5e6-b31168b303ab\" (UID: \"0240e395-0a12-40c4-b5e6-b31168b303ab\") " Feb 02 13:14:15 crc kubenswrapper[4721]: I0202 13:14:15.660240 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hshh8\" (UniqueName: \"kubernetes.io/projected/0240e395-0a12-40c4-b5e6-b31168b303ab-kube-api-access-hshh8\") pod \"0240e395-0a12-40c4-b5e6-b31168b303ab\" (UID: \"0240e395-0a12-40c4-b5e6-b31168b303ab\") " Feb 02 13:14:15 crc kubenswrapper[4721]: I0202 13:14:15.661017 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0240e395-0a12-40c4-b5e6-b31168b303ab-utilities" (OuterVolumeSpecName: "utilities") pod "0240e395-0a12-40c4-b5e6-b31168b303ab" (UID: "0240e395-0a12-40c4-b5e6-b31168b303ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:14:15 crc kubenswrapper[4721]: I0202 13:14:15.670897 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0240e395-0a12-40c4-b5e6-b31168b303ab-kube-api-access-hshh8" (OuterVolumeSpecName: "kube-api-access-hshh8") pod "0240e395-0a12-40c4-b5e6-b31168b303ab" (UID: "0240e395-0a12-40c4-b5e6-b31168b303ab"). InnerVolumeSpecName "kube-api-access-hshh8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:14:15 crc kubenswrapper[4721]: I0202 13:14:15.682899 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-9sfnv"] Feb 02 13:14:15 crc kubenswrapper[4721]: I0202 13:14:15.761681 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0240e395-0a12-40c4-b5e6-b31168b303ab-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:14:15 crc kubenswrapper[4721]: I0202 13:14:15.761712 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hshh8\" (UniqueName: \"kubernetes.io/projected/0240e395-0a12-40c4-b5e6-b31168b303ab-kube-api-access-hshh8\") on node \"crc\" DevicePath \"\"" Feb 02 13:14:15 crc kubenswrapper[4721]: I0202 13:14:15.844177 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0240e395-0a12-40c4-b5e6-b31168b303ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0240e395-0a12-40c4-b5e6-b31168b303ab" (UID: "0240e395-0a12-40c4-b5e6-b31168b303ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:14:15 crc kubenswrapper[4721]: I0202 13:14:15.864374 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0240e395-0a12-40c4-b5e6-b31168b303ab-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:14:16 crc kubenswrapper[4721]: I0202 13:14:16.613130 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-9sfnv" event={"ID":"f9c5d281-206d-4729-a031-feb5b9234c8f","Type":"ContainerStarted","Data":"f952ffe5054eaddc53bbf21a0ed35d42abd52a1957b93939bfbaa5c325764b2e"} Feb 02 13:14:16 crc kubenswrapper[4721]: I0202 13:14:16.615282 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwkvv" event={"ID":"0240e395-0a12-40c4-b5e6-b31168b303ab","Type":"ContainerDied","Data":"b9b8aa71e9272c3a83f81ba3e2d7a8de409acbf418a29cb8e7063ac3659e42e9"} Feb 02 13:14:16 crc kubenswrapper[4721]: I0202 13:14:16.615339 4721 scope.go:117] "RemoveContainer" containerID="d553ccad788dc78c0a62e5fdc6b75b154047af412e476df76d3a46cdee3d9f1e" Feb 02 13:14:16 crc kubenswrapper[4721]: I0202 13:14:16.615439 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jwkvv" Feb 02 13:14:16 crc kubenswrapper[4721]: I0202 13:14:16.637725 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jwkvv"] Feb 02 13:14:16 crc kubenswrapper[4721]: I0202 13:14:16.642744 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jwkvv"] Feb 02 13:14:18 crc kubenswrapper[4721]: I0202 13:14:18.427875 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0240e395-0a12-40c4-b5e6-b31168b303ab" path="/var/lib/kubelet/pods/0240e395-0a12-40c4-b5e6-b31168b303ab/volumes" Feb 02 13:14:18 crc kubenswrapper[4721]: I0202 13:14:18.478600 4721 scope.go:117] "RemoveContainer" containerID="67cb9052470ca9d2b58d9c98a6bbe0181b292c87088d890cf20bcc66f8f8c5b1" Feb 02 13:14:18 crc kubenswrapper[4721]: I0202 13:14:18.503559 4721 scope.go:117] "RemoveContainer" containerID="3fdfe29f698f72ad98e08f0fea56f5e1ea04671f512653d91c811ab010ecba19" Feb 02 13:14:19 crc kubenswrapper[4721]: I0202 13:14:19.643121 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-756566789b-zpsf5" event={"ID":"af1123f2-fce6-410b-a82b-9b292bb8bf68","Type":"ContainerStarted","Data":"ff92a3c5e4759b22495907be8527b3528c9739486111eb31ba8deee32f0fab14"} Feb 02 13:14:27 crc kubenswrapper[4721]: I0202 13:14:27.692643 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-9sfnv" event={"ID":"f9c5d281-206d-4729-a031-feb5b9234c8f","Type":"ContainerStarted","Data":"ba5f0130ce3b917509918bb04b21db20e35aa5b25888b40e07c39452af518a77"} Feb 02 13:14:27 crc kubenswrapper[4721]: I0202 13:14:27.698730 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-756566789b-zpsf5" event={"ID":"af1123f2-fce6-410b-a82b-9b292bb8bf68","Type":"ContainerStarted","Data":"f13de78a119377f101d453a3ce4e9a78e1680646f62d40fcb81d8ed07dbb9210"} Feb 02 13:14:27 crc kubenswrapper[4721]: I0202 13:14:27.713034 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-9sfnv" podStartSLOduration=2.765682576 podStartE2EDuration="13.713015033s" podCreationTimestamp="2026-02-02 13:14:14 +0000 UTC" firstStartedPulling="2026-02-02 13:14:15.725611959 +0000 UTC m=+796.028126348" lastFinishedPulling="2026-02-02 13:14:26.672944426 +0000 UTC m=+806.975458805" observedRunningTime="2026-02-02 13:14:27.709827019 +0000 UTC m=+808.012341408" watchObservedRunningTime="2026-02-02 13:14:27.713015033 +0000 UTC m=+808.015529422" Feb 02 13:14:27 crc kubenswrapper[4721]: I0202 13:14:27.750632 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-756566789b-zpsf5" podStartSLOduration=1.784592833 podStartE2EDuration="15.750608845s" podCreationTimestamp="2026-02-02 13:14:12 +0000 UTC" firstStartedPulling="2026-02-02 13:14:12.782325382 +0000 UTC m=+793.084839771" lastFinishedPulling="2026-02-02 13:14:26.748341394 +0000 UTC m=+807.050855783" observedRunningTime="2026-02-02 13:14:27.747544894 +0000 UTC m=+808.050059293" watchObservedRunningTime="2026-02-02 13:14:27.750608845 +0000 UTC m=+808.053123244" Feb 02 13:14:28 crc kubenswrapper[4721]: I0202 13:14:28.704472 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operators-redhat/loki-operator-controller-manager-756566789b-zpsf5" Feb 02 13:14:28 crc kubenswrapper[4721]: I0202 13:14:28.706574 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-756566789b-zpsf5" Feb 02 13:14:33 crc kubenswrapper[4721]: I0202 13:14:33.185534 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2j8r5"] Feb 02 13:14:33 crc kubenswrapper[4721]: E0202 13:14:33.186379 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0240e395-0a12-40c4-b5e6-b31168b303ab" containerName="extract-content" Feb 02 13:14:33 crc kubenswrapper[4721]: I0202 13:14:33.186392 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="0240e395-0a12-40c4-b5e6-b31168b303ab" containerName="extract-content" Feb 02 13:14:33 crc kubenswrapper[4721]: E0202 13:14:33.186408 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0240e395-0a12-40c4-b5e6-b31168b303ab" containerName="registry-server" Feb 02 13:14:33 crc kubenswrapper[4721]: I0202 13:14:33.186416 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="0240e395-0a12-40c4-b5e6-b31168b303ab" containerName="registry-server" Feb 02 13:14:33 crc kubenswrapper[4721]: E0202 13:14:33.186424 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0240e395-0a12-40c4-b5e6-b31168b303ab" containerName="extract-utilities" Feb 02 13:14:33 crc kubenswrapper[4721]: I0202 13:14:33.186432 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="0240e395-0a12-40c4-b5e6-b31168b303ab" containerName="extract-utilities" Feb 02 13:14:33 crc kubenswrapper[4721]: I0202 13:14:33.186559 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="0240e395-0a12-40c4-b5e6-b31168b303ab" containerName="registry-server" Feb 02 13:14:33 crc kubenswrapper[4721]: I0202 13:14:33.187448 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2j8r5" Feb 02 13:14:33 crc kubenswrapper[4721]: I0202 13:14:33.205049 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2j8r5"] Feb 02 13:14:33 crc kubenswrapper[4721]: I0202 13:14:33.355201 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c09ae42-5e47-43fd-bcdb-b843f0a80cf5-catalog-content\") pod \"redhat-marketplace-2j8r5\" (UID: \"2c09ae42-5e47-43fd-bcdb-b843f0a80cf5\") " pod="openshift-marketplace/redhat-marketplace-2j8r5" Feb 02 13:14:33 crc kubenswrapper[4721]: I0202 13:14:33.355295 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p27h\" (UniqueName: \"kubernetes.io/projected/2c09ae42-5e47-43fd-bcdb-b843f0a80cf5-kube-api-access-4p27h\") pod \"redhat-marketplace-2j8r5\" (UID: \"2c09ae42-5e47-43fd-bcdb-b843f0a80cf5\") " pod="openshift-marketplace/redhat-marketplace-2j8r5" Feb 02 13:14:33 crc kubenswrapper[4721]: I0202 13:14:33.355316 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c09ae42-5e47-43fd-bcdb-b843f0a80cf5-utilities\") pod \"redhat-marketplace-2j8r5\" (UID: \"2c09ae42-5e47-43fd-bcdb-b843f0a80cf5\") " pod="openshift-marketplace/redhat-marketplace-2j8r5" Feb 02 13:14:33 crc kubenswrapper[4721]: I0202 13:14:33.456949 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c09ae42-5e47-43fd-bcdb-b843f0a80cf5-catalog-content\") pod \"redhat-marketplace-2j8r5\" (UID: \"2c09ae42-5e47-43fd-bcdb-b843f0a80cf5\") " pod="openshift-marketplace/redhat-marketplace-2j8r5" Feb 02 13:14:33 crc kubenswrapper[4721]: I0202 13:14:33.457338 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p27h\" (UniqueName: \"kubernetes.io/projected/2c09ae42-5e47-43fd-bcdb-b843f0a80cf5-kube-api-access-4p27h\") pod \"redhat-marketplace-2j8r5\" (UID: \"2c09ae42-5e47-43fd-bcdb-b843f0a80cf5\") " pod="openshift-marketplace/redhat-marketplace-2j8r5" Feb 02 13:14:33 crc kubenswrapper[4721]: I0202 13:14:33.457361 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c09ae42-5e47-43fd-bcdb-b843f0a80cf5-utilities\") pod \"redhat-marketplace-2j8r5\" (UID: \"2c09ae42-5e47-43fd-bcdb-b843f0a80cf5\") " pod="openshift-marketplace/redhat-marketplace-2j8r5" Feb 02 13:14:33 crc kubenswrapper[4721]: I0202 13:14:33.457613 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c09ae42-5e47-43fd-bcdb-b843f0a80cf5-catalog-content\") pod \"redhat-marketplace-2j8r5\" (UID: \"2c09ae42-5e47-43fd-bcdb-b843f0a80cf5\") " pod="openshift-marketplace/redhat-marketplace-2j8r5" Feb 02 13:14:33 crc kubenswrapper[4721]: I0202 13:14:33.457968 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c09ae42-5e47-43fd-bcdb-b843f0a80cf5-utilities\") pod \"redhat-marketplace-2j8r5\" (UID: \"2c09ae42-5e47-43fd-bcdb-b843f0a80cf5\") " pod="openshift-marketplace/redhat-marketplace-2j8r5" Feb 02 13:14:33 crc kubenswrapper[4721]: I0202 13:14:33.481278 4721 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-4p27h\" (UniqueName: \"kubernetes.io/projected/2c09ae42-5e47-43fd-bcdb-b843f0a80cf5-kube-api-access-4p27h\") pod \"redhat-marketplace-2j8r5\" (UID: \"2c09ae42-5e47-43fd-bcdb-b843f0a80cf5\") " pod="openshift-marketplace/redhat-marketplace-2j8r5" Feb 02 13:14:33 crc kubenswrapper[4721]: I0202 13:14:33.508837 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2j8r5" Feb 02 13:14:33 crc kubenswrapper[4721]: I0202 13:14:33.730482 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2j8r5"] Feb 02 13:14:33 crc kubenswrapper[4721]: I0202 13:14:33.733414 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2j8r5" event={"ID":"2c09ae42-5e47-43fd-bcdb-b843f0a80cf5","Type":"ContainerStarted","Data":"c4a7acf51223f45f628cd91f41481e7aede65390c1e3de98544e03d9db406fc1"} Feb 02 13:14:33 crc kubenswrapper[4721]: I0202 13:14:33.849678 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 02 13:14:33 crc kubenswrapper[4721]: I0202 13:14:33.850670 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 02 13:14:33 crc kubenswrapper[4721]: I0202 13:14:33.853688 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 02 13:14:33 crc kubenswrapper[4721]: I0202 13:14:33.856013 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 02 13:14:33 crc kubenswrapper[4721]: I0202 13:14:33.859949 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 02 13:14:33 crc kubenswrapper[4721]: I0202 13:14:33.964021 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a9223aca-8ddb-4dbe-870b-600d3969f728\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9223aca-8ddb-4dbe-870b-600d3969f728\") pod \"minio\" (UID: \"9648319e-f888-4996-976b-f17c6e130cde\") " pod="minio-dev/minio" Feb 02 13:14:33 crc kubenswrapper[4721]: I0202 13:14:33.964181 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnb6p\" (UniqueName: \"kubernetes.io/projected/9648319e-f888-4996-976b-f17c6e130cde-kube-api-access-nnb6p\") pod \"minio\" (UID: \"9648319e-f888-4996-976b-f17c6e130cde\") " pod="minio-dev/minio" Feb 02 13:14:34 crc kubenswrapper[4721]: I0202 13:14:34.065471 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnb6p\" (UniqueName: \"kubernetes.io/projected/9648319e-f888-4996-976b-f17c6e130cde-kube-api-access-nnb6p\") pod \"minio\" (UID: \"9648319e-f888-4996-976b-f17c6e130cde\") " pod="minio-dev/minio" Feb 02 13:14:34 crc kubenswrapper[4721]: I0202 13:14:34.065600 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a9223aca-8ddb-4dbe-870b-600d3969f728\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9223aca-8ddb-4dbe-870b-600d3969f728\") pod \"minio\" (UID: \"9648319e-f888-4996-976b-f17c6e130cde\") " pod="minio-dev/minio" Feb 02 13:14:34 crc kubenswrapper[4721]: I0202 13:14:34.068556 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 02 13:14:34 crc kubenswrapper[4721]: I0202 13:14:34.068602 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a9223aca-8ddb-4dbe-870b-600d3969f728\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9223aca-8ddb-4dbe-870b-600d3969f728\") pod \"minio\" (UID: \"9648319e-f888-4996-976b-f17c6e130cde\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f287cb52ed049d2ea91d091276fa08fb761d43203bac285e5deb1efbd9b1dabf/globalmount\"" pod="minio-dev/minio" Feb 02 13:14:34 crc kubenswrapper[4721]: I0202 13:14:34.092092 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a9223aca-8ddb-4dbe-870b-600d3969f728\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9223aca-8ddb-4dbe-870b-600d3969f728\") pod \"minio\" (UID: \"9648319e-f888-4996-976b-f17c6e130cde\") " pod="minio-dev/minio" Feb 02 13:14:34 crc kubenswrapper[4721]: I0202 13:14:34.101556 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnb6p\" (UniqueName: \"kubernetes.io/projected/9648319e-f888-4996-976b-f17c6e130cde-kube-api-access-nnb6p\") pod \"minio\" (UID: \"9648319e-f888-4996-976b-f17c6e130cde\") " pod="minio-dev/minio" Feb 02 13:14:34 crc kubenswrapper[4721]: I0202 13:14:34.197312 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 02 13:14:34 crc kubenswrapper[4721]: I0202 13:14:34.391521 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 02 13:14:34 crc kubenswrapper[4721]: I0202 13:14:34.752345 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"9648319e-f888-4996-976b-f17c6e130cde","Type":"ContainerStarted","Data":"debbb5bcb668fcd076d496652b6eed619f4a71ae606d31608913d3ffbb8fc23f"} Feb 02 13:14:34 crc kubenswrapper[4721]: I0202 13:14:34.754521 4721 generic.go:334] "Generic (PLEG): container finished" podID="2c09ae42-5e47-43fd-bcdb-b843f0a80cf5" containerID="257023ea900b686319d684f93bf86174efc5e0727f68b5296e819f6673afb9f7" exitCode=0 Feb 02 13:14:34 crc kubenswrapper[4721]: I0202 13:14:34.754561 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2j8r5" event={"ID":"2c09ae42-5e47-43fd-bcdb-b843f0a80cf5","Type":"ContainerDied","Data":"257023ea900b686319d684f93bf86174efc5e0727f68b5296e819f6673afb9f7"} Feb 02 13:14:37 crc kubenswrapper[4721]: I0202 13:14:37.773349 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"9648319e-f888-4996-976b-f17c6e130cde","Type":"ContainerStarted","Data":"3565b34d86d02a3d4322f46c7a25c2ed69b11c2d2182abf9f8760caf8949512d"} Feb 02 13:14:37 crc kubenswrapper[4721]: I0202 13:14:37.775335 4721 generic.go:334] "Generic (PLEG): container finished" podID="2c09ae42-5e47-43fd-bcdb-b843f0a80cf5" containerID="e28c5c9633828095d9e468bec9d5dca15024d1f9adc02fe17ec54b56e17309d6" exitCode=0 Feb 02 13:14:37 crc kubenswrapper[4721]: I0202 13:14:37.775360 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2j8r5" event={"ID":"2c09ae42-5e47-43fd-bcdb-b843f0a80cf5","Type":"ContainerDied","Data":"e28c5c9633828095d9e468bec9d5dca15024d1f9adc02fe17ec54b56e17309d6"} Feb 02 13:14:37 crc kubenswrapper[4721]: I0202 13:14:37.791738 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=3.995107217 podStartE2EDuration="6.791721204s" 
podCreationTimestamp="2026-02-02 13:14:31 +0000 UTC" firstStartedPulling="2026-02-02 13:14:34.397324941 +0000 UTC m=+814.699839330" lastFinishedPulling="2026-02-02 13:14:37.193938928 +0000 UTC m=+817.496453317" observedRunningTime="2026-02-02 13:14:37.786760023 +0000 UTC m=+818.089274402" watchObservedRunningTime="2026-02-02 13:14:37.791721204 +0000 UTC m=+818.094235593" Feb 02 13:14:38 crc kubenswrapper[4721]: I0202 13:14:38.784125 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2j8r5" event={"ID":"2c09ae42-5e47-43fd-bcdb-b843f0a80cf5","Type":"ContainerStarted","Data":"489cc6a3cbe57f3f7a1a1c8c11ef02d4ab91eb4ecc18fe6c0a2d47b0318d8936"} Feb 02 13:14:38 crc kubenswrapper[4721]: I0202 13:14:38.806007 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2j8r5" podStartSLOduration=2.380270372 podStartE2EDuration="5.805987721s" podCreationTimestamp="2026-02-02 13:14:33 +0000 UTC" firstStartedPulling="2026-02-02 13:14:34.756703396 +0000 UTC m=+815.059217825" lastFinishedPulling="2026-02-02 13:14:38.182420785 +0000 UTC m=+818.484935174" observedRunningTime="2026-02-02 13:14:38.802457259 +0000 UTC m=+819.104971658" watchObservedRunningTime="2026-02-02 13:14:38.805987721 +0000 UTC m=+819.108502120" Feb 02 13:14:43 crc kubenswrapper[4721]: I0202 13:14:43.509720 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2j8r5" Feb 02 13:14:43 crc kubenswrapper[4721]: I0202 13:14:43.510277 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2j8r5" Feb 02 13:14:43 crc kubenswrapper[4721]: I0202 13:14:43.551786 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2j8r5" Feb 02 13:14:43 crc kubenswrapper[4721]: I0202 13:14:43.862808 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2j8r5" Feb 02 13:14:43 crc kubenswrapper[4721]: I0202 13:14:43.905609 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2j8r5"] Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.118655 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-tb6gs"] Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.119944 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tb6gs" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.123835 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.123919 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-tb6gs"] Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.128604 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.128777 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-qw8n7" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.134148 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.134196 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.222026 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/7a392a7d-824d-420d-bf0d-66ca95134ea6-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-tb6gs\" (UID: \"7a392a7d-824d-420d-bf0d-66ca95134ea6\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tb6gs" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.222096 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a392a7d-824d-420d-bf0d-66ca95134ea6-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-tb6gs\" (UID: \"7a392a7d-824d-420d-bf0d-66ca95134ea6\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tb6gs" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.222121 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/7a392a7d-824d-420d-bf0d-66ca95134ea6-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-tb6gs\" (UID: \"7a392a7d-824d-420d-bf0d-66ca95134ea6\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tb6gs" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.222158 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z75dm\" (UniqueName: \"kubernetes.io/projected/7a392a7d-824d-420d-bf0d-66ca95134ea6-kube-api-access-z75dm\") pod \"logging-loki-distributor-5f678c8dd6-tb6gs\" (UID: \"7a392a7d-824d-420d-bf0d-66ca95134ea6\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tb6gs" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.222176 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a392a7d-824d-420d-bf0d-66ca95134ea6-config\") pod \"logging-loki-distributor-5f678c8dd6-tb6gs\" (UID: \"7a392a7d-824d-420d-bf0d-66ca95134ea6\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tb6gs" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.281943 4721 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-logging/logging-loki-querier-76788598db-mnp7c"] Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.282937 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-mnp7c" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.286360 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.286383 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.286833 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.310417 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-mnp7c"] Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.323926 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/7a392a7d-824d-420d-bf0d-66ca95134ea6-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-tb6gs\" (UID: \"7a392a7d-824d-420d-bf0d-66ca95134ea6\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tb6gs" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.323991 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a392a7d-824d-420d-bf0d-66ca95134ea6-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-tb6gs\" (UID: \"7a392a7d-824d-420d-bf0d-66ca95134ea6\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tb6gs" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.324020 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/7a392a7d-824d-420d-bf0d-66ca95134ea6-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-tb6gs\" (UID: \"7a392a7d-824d-420d-bf0d-66ca95134ea6\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tb6gs" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.324089 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a392a7d-824d-420d-bf0d-66ca95134ea6-config\") pod \"logging-loki-distributor-5f678c8dd6-tb6gs\" (UID: \"7a392a7d-824d-420d-bf0d-66ca95134ea6\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tb6gs" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.324115 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z75dm\" (UniqueName: \"kubernetes.io/projected/7a392a7d-824d-420d-bf0d-66ca95134ea6-kube-api-access-z75dm\") pod \"logging-loki-distributor-5f678c8dd6-tb6gs\" (UID: \"7a392a7d-824d-420d-bf0d-66ca95134ea6\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tb6gs" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.335279 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a392a7d-824d-420d-bf0d-66ca95134ea6-config\") pod \"logging-loki-distributor-5f678c8dd6-tb6gs\" (UID: \"7a392a7d-824d-420d-bf0d-66ca95134ea6\") " 
pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tb6gs" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.335992 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a392a7d-824d-420d-bf0d-66ca95134ea6-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-tb6gs\" (UID: \"7a392a7d-824d-420d-bf0d-66ca95134ea6\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tb6gs" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.338953 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/7a392a7d-824d-420d-bf0d-66ca95134ea6-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-tb6gs\" (UID: \"7a392a7d-824d-420d-bf0d-66ca95134ea6\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tb6gs" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.368354 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/7a392a7d-824d-420d-bf0d-66ca95134ea6-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-tb6gs\" (UID: \"7a392a7d-824d-420d-bf0d-66ca95134ea6\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tb6gs" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.379452 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z75dm\" (UniqueName: \"kubernetes.io/projected/7a392a7d-824d-420d-bf0d-66ca95134ea6-kube-api-access-z75dm\") pod \"logging-loki-distributor-5f678c8dd6-tb6gs\" (UID: \"7a392a7d-824d-420d-bf0d-66ca95134ea6\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tb6gs" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.425273 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/98490098-f31f-4ee3-9f15-ee37b8740035-logging-loki-s3\") pod \"logging-loki-querier-76788598db-mnp7c\" (UID: \"98490098-f31f-4ee3-9f15-ee37b8740035\") " pod="openshift-logging/logging-loki-querier-76788598db-mnp7c" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.425659 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/98490098-f31f-4ee3-9f15-ee37b8740035-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-mnp7c\" (UID: \"98490098-f31f-4ee3-9f15-ee37b8740035\") " pod="openshift-logging/logging-loki-querier-76788598db-mnp7c" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.425707 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98490098-f31f-4ee3-9f15-ee37b8740035-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-mnp7c\" (UID: \"98490098-f31f-4ee3-9f15-ee37b8740035\") " pod="openshift-logging/logging-loki-querier-76788598db-mnp7c" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.425749 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/98490098-f31f-4ee3-9f15-ee37b8740035-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-mnp7c\" (UID: \"98490098-f31f-4ee3-9f15-ee37b8740035\") " 
pod="openshift-logging/logging-loki-querier-76788598db-mnp7c" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.425820 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdcff\" (UniqueName: \"kubernetes.io/projected/98490098-f31f-4ee3-9f15-ee37b8740035-kube-api-access-rdcff\") pod \"logging-loki-querier-76788598db-mnp7c\" (UID: \"98490098-f31f-4ee3-9f15-ee37b8740035\") " pod="openshift-logging/logging-loki-querier-76788598db-mnp7c" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.425873 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98490098-f31f-4ee3-9f15-ee37b8740035-config\") pod \"logging-loki-querier-76788598db-mnp7c\" (UID: \"98490098-f31f-4ee3-9f15-ee37b8740035\") " pod="openshift-logging/logging-loki-querier-76788598db-mnp7c" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.430469 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-xs62z"] Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.434906 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-xs62z" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.441746 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-xs62z"] Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.442026 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tb6gs" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.443618 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.443813 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.527342 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/98490098-f31f-4ee3-9f15-ee37b8740035-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-mnp7c\" (UID: \"98490098-f31f-4ee3-9f15-ee37b8740035\") " pod="openshift-logging/logging-loki-querier-76788598db-mnp7c" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.527477 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93d83a8b-3334-43f3-b417-58a7fbd7282c-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-xs62z\" (UID: \"93d83a8b-3334-43f3-b417-58a7fbd7282c\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-xs62z" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.527513 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98490098-f31f-4ee3-9f15-ee37b8740035-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-mnp7c\" (UID: \"98490098-f31f-4ee3-9f15-ee37b8740035\") " pod="openshift-logging/logging-loki-querier-76788598db-mnp7c" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.527551 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/98490098-f31f-4ee3-9f15-ee37b8740035-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-mnp7c\" (UID: \"98490098-f31f-4ee3-9f15-ee37b8740035\") " pod="openshift-logging/logging-loki-querier-76788598db-mnp7c" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.527575 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdmxq\" (UniqueName: \"kubernetes.io/projected/93d83a8b-3334-43f3-b417-58a7fbd7282c-kube-api-access-kdmxq\") pod \"logging-loki-query-frontend-69d9546745-xs62z\" (UID: \"93d83a8b-3334-43f3-b417-58a7fbd7282c\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-xs62z" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.527670 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93d83a8b-3334-43f3-b417-58a7fbd7282c-config\") pod \"logging-loki-query-frontend-69d9546745-xs62z\" (UID: \"93d83a8b-3334-43f3-b417-58a7fbd7282c\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-xs62z" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.527732 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdcff\" (UniqueName: \"kubernetes.io/projected/98490098-f31f-4ee3-9f15-ee37b8740035-kube-api-access-rdcff\") pod \"logging-loki-querier-76788598db-mnp7c\" (UID: \"98490098-f31f-4ee3-9f15-ee37b8740035\") " pod="openshift-logging/logging-loki-querier-76788598db-mnp7c" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.527823 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/93d83a8b-3334-43f3-b417-58a7fbd7282c-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-xs62z\" (UID: \"93d83a8b-3334-43f3-b417-58a7fbd7282c\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-xs62z" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.527851 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98490098-f31f-4ee3-9f15-ee37b8740035-config\") pod \"logging-loki-querier-76788598db-mnp7c\" (UID: \"98490098-f31f-4ee3-9f15-ee37b8740035\") " pod="openshift-logging/logging-loki-querier-76788598db-mnp7c" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.527898 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/93d83a8b-3334-43f3-b417-58a7fbd7282c-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-xs62z\" (UID: \"93d83a8b-3334-43f3-b417-58a7fbd7282c\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-xs62z" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.527950 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/98490098-f31f-4ee3-9f15-ee37b8740035-logging-loki-s3\") pod \"logging-loki-querier-76788598db-mnp7c\" (UID: \"98490098-f31f-4ee3-9f15-ee37b8740035\") " pod="openshift-logging/logging-loki-querier-76788598db-mnp7c" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.531684 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/98490098-f31f-4ee3-9f15-ee37b8740035-config\") pod \"logging-loki-querier-76788598db-mnp7c\" (UID: \"98490098-f31f-4ee3-9f15-ee37b8740035\") " pod="openshift-logging/logging-loki-querier-76788598db-mnp7c" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.536129 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-5f86bf5685-p62nr"] Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.537464 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/98490098-f31f-4ee3-9f15-ee37b8740035-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-mnp7c\" (UID: \"98490098-f31f-4ee3-9f15-ee37b8740035\") " pod="openshift-logging/logging-loki-querier-76788598db-mnp7c" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.537568 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.538042 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-5f86bf5685-lsthj"] Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.539432 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.540856 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98490098-f31f-4ee3-9f15-ee37b8740035-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-mnp7c\" (UID: \"98490098-f31f-4ee3-9f15-ee37b8740035\") " pod="openshift-logging/logging-loki-querier-76788598db-mnp7c" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.546840 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/98490098-f31f-4ee3-9f15-ee37b8740035-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-mnp7c\" (UID: \"98490098-f31f-4ee3-9f15-ee37b8740035\") " pod="openshift-logging/logging-loki-querier-76788598db-mnp7c" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.547389 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.547637 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.547783 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.547938 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-mlm2x" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.548081 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.548371 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/98490098-f31f-4ee3-9f15-ee37b8740035-logging-loki-s3\") pod \"logging-loki-querier-76788598db-mnp7c\" (UID: \"98490098-f31f-4ee3-9f15-ee37b8740035\") " 
pod="openshift-logging/logging-loki-querier-76788598db-mnp7c" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.548997 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.561175 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdcff\" (UniqueName: \"kubernetes.io/projected/98490098-f31f-4ee3-9f15-ee37b8740035-kube-api-access-rdcff\") pod \"logging-loki-querier-76788598db-mnp7c\" (UID: \"98490098-f31f-4ee3-9f15-ee37b8740035\") " pod="openshift-logging/logging-loki-querier-76788598db-mnp7c" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.578359 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5f86bf5685-p62nr"] Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.584799 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5f86bf5685-lsthj"] Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.603492 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-mnp7c" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.629767 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/6bbaf0c4-9bfc-4cf9-b238-4f494e492243-tenants\") pod \"logging-loki-gateway-5f86bf5685-lsthj\" (UID: \"6bbaf0c4-9bfc-4cf9-b238-4f494e492243\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.629806 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hhh4\" (UniqueName: \"kubernetes.io/projected/6bbaf0c4-9bfc-4cf9-b238-4f494e492243-kube-api-access-7hhh4\") pod \"logging-loki-gateway-5f86bf5685-lsthj\" (UID: \"6bbaf0c4-9bfc-4cf9-b238-4f494e492243\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.629830 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/e0a2094f-7b9c-426c-b7ea-6a175be407f1-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f86bf5685-p62nr\" (UID: \"e0a2094f-7b9c-426c-b7ea-6a175be407f1\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.629849 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/e0a2094f-7b9c-426c-b7ea-6a175be407f1-tls-secret\") pod \"logging-loki-gateway-5f86bf5685-p62nr\" (UID: \"e0a2094f-7b9c-426c-b7ea-6a175be407f1\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.629871 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93d83a8b-3334-43f3-b417-58a7fbd7282c-config\") pod \"logging-loki-query-frontend-69d9546745-xs62z\" (UID: \"93d83a8b-3334-43f3-b417-58a7fbd7282c\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-xs62z" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.629903 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0a2094f-7b9c-426c-b7ea-6a175be407f1-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f86bf5685-p62nr\" (UID: \"e0a2094f-7b9c-426c-b7ea-6a175be407f1\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.629925 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/6bbaf0c4-9bfc-4cf9-b238-4f494e492243-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f86bf5685-lsthj\" (UID: \"6bbaf0c4-9bfc-4cf9-b238-4f494e492243\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.629948 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bbaf0c4-9bfc-4cf9-b238-4f494e492243-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f86bf5685-lsthj\" (UID: \"6bbaf0c4-9bfc-4cf9-b238-4f494e492243\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.629964 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/6bbaf0c4-9bfc-4cf9-b238-4f494e492243-tls-secret\") pod \"logging-loki-gateway-5f86bf5685-lsthj\" (UID: \"6bbaf0c4-9bfc-4cf9-b238-4f494e492243\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.629987 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/93d83a8b-3334-43f3-b417-58a7fbd7282c-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-xs62z\" (UID: \"93d83a8b-3334-43f3-b417-58a7fbd7282c\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-xs62z" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.630006 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj56x\" (UniqueName: \"kubernetes.io/projected/e0a2094f-7b9c-426c-b7ea-6a175be407f1-kube-api-access-sj56x\") pod \"logging-loki-gateway-5f86bf5685-p62nr\" (UID: \"e0a2094f-7b9c-426c-b7ea-6a175be407f1\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.630027 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/93d83a8b-3334-43f3-b417-58a7fbd7282c-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-xs62z\" (UID: \"93d83a8b-3334-43f3-b417-58a7fbd7282c\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-xs62z" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.630043 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/e0a2094f-7b9c-426c-b7ea-6a175be407f1-rbac\") pod \"logging-loki-gateway-5f86bf5685-p62nr\" (UID: \"e0a2094f-7b9c-426c-b7ea-6a175be407f1\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.630090 4721 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bbaf0c4-9bfc-4cf9-b238-4f494e492243-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f86bf5685-lsthj\" (UID: \"6bbaf0c4-9bfc-4cf9-b238-4f494e492243\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.630128 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/e0a2094f-7b9c-426c-b7ea-6a175be407f1-lokistack-gateway\") pod \"logging-loki-gateway-5f86bf5685-p62nr\" (UID: \"e0a2094f-7b9c-426c-b7ea-6a175be407f1\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.630147 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93d83a8b-3334-43f3-b417-58a7fbd7282c-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-xs62z\" (UID: \"93d83a8b-3334-43f3-b417-58a7fbd7282c\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-xs62z" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.630168 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/6bbaf0c4-9bfc-4cf9-b238-4f494e492243-rbac\") pod \"logging-loki-gateway-5f86bf5685-lsthj\" (UID: \"6bbaf0c4-9bfc-4cf9-b238-4f494e492243\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.630185 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/e0a2094f-7b9c-426c-b7ea-6a175be407f1-tenants\") pod \"logging-loki-gateway-5f86bf5685-p62nr\" (UID: \"e0a2094f-7b9c-426c-b7ea-6a175be407f1\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.630207 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdmxq\" (UniqueName: \"kubernetes.io/projected/93d83a8b-3334-43f3-b417-58a7fbd7282c-kube-api-access-kdmxq\") pod \"logging-loki-query-frontend-69d9546745-xs62z\" (UID: \"93d83a8b-3334-43f3-b417-58a7fbd7282c\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-xs62z" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.630223 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/6bbaf0c4-9bfc-4cf9-b238-4f494e492243-lokistack-gateway\") pod \"logging-loki-gateway-5f86bf5685-lsthj\" (UID: \"6bbaf0c4-9bfc-4cf9-b238-4f494e492243\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.630241 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0a2094f-7b9c-426c-b7ea-6a175be407f1-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f86bf5685-p62nr\" (UID: \"e0a2094f-7b9c-426c-b7ea-6a175be407f1\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.631633 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/93d83a8b-3334-43f3-b417-58a7fbd7282c-config\") pod \"logging-loki-query-frontend-69d9546745-xs62z\" (UID: \"93d83a8b-3334-43f3-b417-58a7fbd7282c\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-xs62z" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.640263 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/93d83a8b-3334-43f3-b417-58a7fbd7282c-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-xs62z\" (UID: \"93d83a8b-3334-43f3-b417-58a7fbd7282c\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-xs62z" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.644381 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/93d83a8b-3334-43f3-b417-58a7fbd7282c-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-xs62z\" (UID: \"93d83a8b-3334-43f3-b417-58a7fbd7282c\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-xs62z" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.645056 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93d83a8b-3334-43f3-b417-58a7fbd7282c-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-xs62z\" (UID: \"93d83a8b-3334-43f3-b417-58a7fbd7282c\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-xs62z" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.662446 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdmxq\" (UniqueName: \"kubernetes.io/projected/93d83a8b-3334-43f3-b417-58a7fbd7282c-kube-api-access-kdmxq\") pod \"logging-loki-query-frontend-69d9546745-xs62z\" (UID: \"93d83a8b-3334-43f3-b417-58a7fbd7282c\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-xs62z" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.732036 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0a2094f-7b9c-426c-b7ea-6a175be407f1-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f86bf5685-p62nr\" (UID: \"e0a2094f-7b9c-426c-b7ea-6a175be407f1\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.732418 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/6bbaf0c4-9bfc-4cf9-b238-4f494e492243-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f86bf5685-lsthj\" (UID: \"6bbaf0c4-9bfc-4cf9-b238-4f494e492243\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.732483 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bbaf0c4-9bfc-4cf9-b238-4f494e492243-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f86bf5685-lsthj\" (UID: \"6bbaf0c4-9bfc-4cf9-b238-4f494e492243\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.732511 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: 
\"kubernetes.io/secret/6bbaf0c4-9bfc-4cf9-b238-4f494e492243-tls-secret\") pod \"logging-loki-gateway-5f86bf5685-lsthj\" (UID: \"6bbaf0c4-9bfc-4cf9-b238-4f494e492243\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.732533 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj56x\" (UniqueName: \"kubernetes.io/projected/e0a2094f-7b9c-426c-b7ea-6a175be407f1-kube-api-access-sj56x\") pod \"logging-loki-gateway-5f86bf5685-p62nr\" (UID: \"e0a2094f-7b9c-426c-b7ea-6a175be407f1\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.732556 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/e0a2094f-7b9c-426c-b7ea-6a175be407f1-rbac\") pod \"logging-loki-gateway-5f86bf5685-p62nr\" (UID: \"e0a2094f-7b9c-426c-b7ea-6a175be407f1\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.732581 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bbaf0c4-9bfc-4cf9-b238-4f494e492243-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f86bf5685-lsthj\" (UID: \"6bbaf0c4-9bfc-4cf9-b238-4f494e492243\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.732612 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/e0a2094f-7b9c-426c-b7ea-6a175be407f1-lokistack-gateway\") pod \"logging-loki-gateway-5f86bf5685-p62nr\" (UID: \"e0a2094f-7b9c-426c-b7ea-6a175be407f1\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.732636 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/6bbaf0c4-9bfc-4cf9-b238-4f494e492243-rbac\") pod \"logging-loki-gateway-5f86bf5685-lsthj\" (UID: \"6bbaf0c4-9bfc-4cf9-b238-4f494e492243\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.732655 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/e0a2094f-7b9c-426c-b7ea-6a175be407f1-tenants\") pod \"logging-loki-gateway-5f86bf5685-p62nr\" (UID: \"e0a2094f-7b9c-426c-b7ea-6a175be407f1\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.732677 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/6bbaf0c4-9bfc-4cf9-b238-4f494e492243-lokistack-gateway\") pod \"logging-loki-gateway-5f86bf5685-lsthj\" (UID: \"6bbaf0c4-9bfc-4cf9-b238-4f494e492243\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.732693 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0a2094f-7b9c-426c-b7ea-6a175be407f1-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f86bf5685-p62nr\" (UID: \"e0a2094f-7b9c-426c-b7ea-6a175be407f1\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" 
Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.732725 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/6bbaf0c4-9bfc-4cf9-b238-4f494e492243-tenants\") pod \"logging-loki-gateway-5f86bf5685-lsthj\" (UID: \"6bbaf0c4-9bfc-4cf9-b238-4f494e492243\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.732741 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hhh4\" (UniqueName: \"kubernetes.io/projected/6bbaf0c4-9bfc-4cf9-b238-4f494e492243-kube-api-access-7hhh4\") pod \"logging-loki-gateway-5f86bf5685-lsthj\" (UID: \"6bbaf0c4-9bfc-4cf9-b238-4f494e492243\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.732757 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/e0a2094f-7b9c-426c-b7ea-6a175be407f1-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f86bf5685-p62nr\" (UID: \"e0a2094f-7b9c-426c-b7ea-6a175be407f1\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.732773 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/e0a2094f-7b9c-426c-b7ea-6a175be407f1-tls-secret\") pod \"logging-loki-gateway-5f86bf5685-p62nr\" (UID: \"e0a2094f-7b9c-426c-b7ea-6a175be407f1\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:44 crc kubenswrapper[4721]: E0202 13:14:44.732894 4721 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Feb 02 13:14:44 crc kubenswrapper[4721]: E0202 13:14:44.732945 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0a2094f-7b9c-426c-b7ea-6a175be407f1-tls-secret podName:e0a2094f-7b9c-426c-b7ea-6a175be407f1 nodeName:}" failed. No retries permitted until 2026-02-02 13:14:45.232926421 +0000 UTC m=+825.535440810 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/e0a2094f-7b9c-426c-b7ea-6a175be407f1-tls-secret") pod "logging-loki-gateway-5f86bf5685-p62nr" (UID: "e0a2094f-7b9c-426c-b7ea-6a175be407f1") : secret "logging-loki-gateway-http" not found Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.733090 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0a2094f-7b9c-426c-b7ea-6a175be407f1-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f86bf5685-p62nr\" (UID: \"e0a2094f-7b9c-426c-b7ea-6a175be407f1\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:44 crc kubenswrapper[4721]: E0202 13:14:44.734096 4721 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Feb 02 13:14:44 crc kubenswrapper[4721]: E0202 13:14:44.734152 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bbaf0c4-9bfc-4cf9-b238-4f494e492243-tls-secret podName:6bbaf0c4-9bfc-4cf9-b238-4f494e492243 nodeName:}" failed. No retries permitted until 2026-02-02 13:14:45.234132023 +0000 UTC m=+825.536646472 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/6bbaf0c4-9bfc-4cf9-b238-4f494e492243-tls-secret") pod "logging-loki-gateway-5f86bf5685-lsthj" (UID: "6bbaf0c4-9bfc-4cf9-b238-4f494e492243") : secret "logging-loki-gateway-http" not found Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.734248 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/6bbaf0c4-9bfc-4cf9-b238-4f494e492243-rbac\") pod \"logging-loki-gateway-5f86bf5685-lsthj\" (UID: \"6bbaf0c4-9bfc-4cf9-b238-4f494e492243\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.734382 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bbaf0c4-9bfc-4cf9-b238-4f494e492243-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f86bf5685-lsthj\" (UID: \"6bbaf0c4-9bfc-4cf9-b238-4f494e492243\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.734863 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/6bbaf0c4-9bfc-4cf9-b238-4f494e492243-lokistack-gateway\") pod \"logging-loki-gateway-5f86bf5685-lsthj\" (UID: \"6bbaf0c4-9bfc-4cf9-b238-4f494e492243\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.734963 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/e0a2094f-7b9c-426c-b7ea-6a175be407f1-lokistack-gateway\") pod \"logging-loki-gateway-5f86bf5685-p62nr\" (UID: \"e0a2094f-7b9c-426c-b7ea-6a175be407f1\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.735049 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/e0a2094f-7b9c-426c-b7ea-6a175be407f1-rbac\") pod \"logging-loki-gateway-5f86bf5685-p62nr\" (UID: \"e0a2094f-7b9c-426c-b7ea-6a175be407f1\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.735604 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0a2094f-7b9c-426c-b7ea-6a175be407f1-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f86bf5685-p62nr\" (UID: \"e0a2094f-7b9c-426c-b7ea-6a175be407f1\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.736630 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bbaf0c4-9bfc-4cf9-b238-4f494e492243-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f86bf5685-lsthj\" (UID: \"6bbaf0c4-9bfc-4cf9-b238-4f494e492243\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.736859 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/e0a2094f-7b9c-426c-b7ea-6a175be407f1-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f86bf5685-p62nr\" (UID: \"e0a2094f-7b9c-426c-b7ea-6a175be407f1\") " 
pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.740304 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/6bbaf0c4-9bfc-4cf9-b238-4f494e492243-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f86bf5685-lsthj\" (UID: \"6bbaf0c4-9bfc-4cf9-b238-4f494e492243\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.742512 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/e0a2094f-7b9c-426c-b7ea-6a175be407f1-tenants\") pod \"logging-loki-gateway-5f86bf5685-p62nr\" (UID: \"e0a2094f-7b9c-426c-b7ea-6a175be407f1\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.752998 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hhh4\" (UniqueName: \"kubernetes.io/projected/6bbaf0c4-9bfc-4cf9-b238-4f494e492243-kube-api-access-7hhh4\") pod \"logging-loki-gateway-5f86bf5685-lsthj\" (UID: \"6bbaf0c4-9bfc-4cf9-b238-4f494e492243\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.753771 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj56x\" (UniqueName: \"kubernetes.io/projected/e0a2094f-7b9c-426c-b7ea-6a175be407f1-kube-api-access-sj56x\") pod \"logging-loki-gateway-5f86bf5685-p62nr\" (UID: \"e0a2094f-7b9c-426c-b7ea-6a175be407f1\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.754703 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/6bbaf0c4-9bfc-4cf9-b238-4f494e492243-tenants\") pod \"logging-loki-gateway-5f86bf5685-lsthj\" (UID: \"6bbaf0c4-9bfc-4cf9-b238-4f494e492243\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.760784 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-xs62z" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.764696 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.764731 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:14:44 crc kubenswrapper[4721]: I0202 13:14:44.982471 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-tb6gs"] Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.072855 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-mnp7c"] Feb 02 13:14:45 crc kubenswrapper[4721]: W0202 13:14:45.078383 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98490098_f31f_4ee3_9f15_ee37b8740035.slice/crio-fb8e6e345bdefed548652ef504a4b3699207acbcab99af43a7672fc56787ee01 WatchSource:0}: Error finding container fb8e6e345bdefed548652ef504a4b3699207acbcab99af43a7672fc56787ee01: Status 404 returned error can't find the container with id fb8e6e345bdefed548652ef504a4b3699207acbcab99af43a7672fc56787ee01 Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.209821 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-xs62z"] Feb 02 13:14:45 crc kubenswrapper[4721]: W0202 13:14:45.211763 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93d83a8b_3334_43f3_b417_58a7fbd7282c.slice/crio-9223b60892b923767e4ceb8693db69ab690eda2f495ec0aee3e1d718b91f663d WatchSource:0}: Error finding container 9223b60892b923767e4ceb8693db69ab690eda2f495ec0aee3e1d718b91f663d: Status 404 returned error can't find the container with id 9223b60892b923767e4ceb8693db69ab690eda2f495ec0aee3e1d718b91f663d Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.240519 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/e0a2094f-7b9c-426c-b7ea-6a175be407f1-tls-secret\") pod \"logging-loki-gateway-5f86bf5685-p62nr\" (UID: \"e0a2094f-7b9c-426c-b7ea-6a175be407f1\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.240602 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/6bbaf0c4-9bfc-4cf9-b238-4f494e492243-tls-secret\") pod \"logging-loki-gateway-5f86bf5685-lsthj\" (UID: \"6bbaf0c4-9bfc-4cf9-b238-4f494e492243\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.243952 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/6bbaf0c4-9bfc-4cf9-b238-4f494e492243-tls-secret\") pod 
\"logging-loki-gateway-5f86bf5685-lsthj\" (UID: \"6bbaf0c4-9bfc-4cf9-b238-4f494e492243\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.244448 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/e0a2094f-7b9c-426c-b7ea-6a175be407f1-tls-secret\") pod \"logging-loki-gateway-5f86bf5685-p62nr\" (UID: \"e0a2094f-7b9c-426c-b7ea-6a175be407f1\") " pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.269924 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.270928 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.273027 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.273411 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.286209 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.342800 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-06ecef5f-e148-418a-a927-77f97664b9ce\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-06ecef5f-e148-418a-a927-77f97664b9ce\") pod \"logging-loki-ingester-0\" (UID: \"b3605888-b0c3-4049-8f6a-cd4f380b91a7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.360816 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.361870 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.364292 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.364682 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.367928 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.413215 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.414245 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.416184 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.422718 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.422886 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.443865 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3605888-b0c3-4049-8f6a-cd4f380b91a7-config\") pod \"logging-loki-ingester-0\" (UID: \"b3605888-b0c3-4049-8f6a-cd4f380b91a7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.443923 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/b3605888-b0c3-4049-8f6a-cd4f380b91a7-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"b3605888-b0c3-4049-8f6a-cd4f380b91a7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.443948 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b3605888-b0c3-4049-8f6a-cd4f380b91a7-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"b3605888-b0c3-4049-8f6a-cd4f380b91a7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.444042 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvzn4\" (UniqueName: \"kubernetes.io/projected/b3605888-b0c3-4049-8f6a-cd4f380b91a7-kube-api-access-xvzn4\") pod \"logging-loki-ingester-0\" (UID: \"b3605888-b0c3-4049-8f6a-cd4f380b91a7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.444107 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2e8486e0-2725-4562-874a-a8a36f02e65a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e8486e0-2725-4562-874a-a8a36f02e65a\") pod \"logging-loki-ingester-0\" (UID: \"b3605888-b0c3-4049-8f6a-cd4f380b91a7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.444338 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/b3605888-b0c3-4049-8f6a-cd4f380b91a7-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"b3605888-b0c3-4049-8f6a-cd4f380b91a7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.444385 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-06ecef5f-e148-418a-a927-77f97664b9ce\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-06ecef5f-e148-418a-a927-77f97664b9ce\") pod \"logging-loki-ingester-0\" (UID: \"b3605888-b0c3-4049-8f6a-cd4f380b91a7\") " 
pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.444426 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3605888-b0c3-4049-8f6a-cd4f380b91a7-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"b3605888-b0c3-4049-8f6a-cd4f380b91a7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.447647 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.447685 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-06ecef5f-e148-418a-a927-77f97664b9ce\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-06ecef5f-e148-418a-a927-77f97664b9ce\") pod \"logging-loki-ingester-0\" (UID: \"b3605888-b0c3-4049-8f6a-cd4f380b91a7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c9eb1d3a011e7af2d0e565897dbc25965e43c1a4bea15041b5d2658356336c80/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.479608 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-06ecef5f-e148-418a-a927-77f97664b9ce\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-06ecef5f-e148-418a-a927-77f97664b9ce\") pod \"logging-loki-ingester-0\" (UID: \"b3605888-b0c3-4049-8f6a-cd4f380b91a7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.480985 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.492546 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.546572 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/b3605888-b0c3-4049-8f6a-cd4f380b91a7-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"b3605888-b0c3-4049-8f6a-cd4f380b91a7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.546633 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cb9902a-5fe1-42ee-a659-eebccc3aec15-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"2cb9902a-5fe1-42ee-a659-eebccc3aec15\") " pod="openshift-logging/logging-loki-compactor-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.546658 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e-config\") pod \"logging-loki-index-gateway-0\" (UID: \"05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.546684 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/2cb9902a-5fe1-42ee-a659-eebccc3aec15-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"2cb9902a-5fe1-42ee-a659-eebccc3aec15\") " pod="openshift-logging/logging-loki-compactor-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.546725 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.546755 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3605888-b0c3-4049-8f6a-cd4f380b91a7-config\") pod \"logging-loki-ingester-0\" (UID: \"b3605888-b0c3-4049-8f6a-cd4f380b91a7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.546783 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/b3605888-b0c3-4049-8f6a-cd4f380b91a7-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"b3605888-b0c3-4049-8f6a-cd4f380b91a7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.546804 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b3605888-b0c3-4049-8f6a-cd4f380b91a7-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"b3605888-b0c3-4049-8f6a-cd4f380b91a7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.546831 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: 
\"kubernetes.io/secret/05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.546863 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.546888 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cb9902a-5fe1-42ee-a659-eebccc3aec15-config\") pod \"logging-loki-compactor-0\" (UID: \"2cb9902a-5fe1-42ee-a659-eebccc3aec15\") " pod="openshift-logging/logging-loki-compactor-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.546928 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68lzt\" (UniqueName: \"kubernetes.io/projected/2cb9902a-5fe1-42ee-a659-eebccc3aec15-kube-api-access-68lzt\") pod \"logging-loki-compactor-0\" (UID: \"2cb9902a-5fe1-42ee-a659-eebccc3aec15\") " pod="openshift-logging/logging-loki-compactor-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.546953 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.546974 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/2cb9902a-5fe1-42ee-a659-eebccc3aec15-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"2cb9902a-5fe1-42ee-a659-eebccc3aec15\") " pod="openshift-logging/logging-loki-compactor-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.547003 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a406acb1-d9b3-46b5-9297-bf84583d192a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a406acb1-d9b3-46b5-9297-bf84583d192a\") pod \"logging-loki-index-gateway-0\" (UID: \"05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.547027 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/2cb9902a-5fe1-42ee-a659-eebccc3aec15-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"2cb9902a-5fe1-42ee-a659-eebccc3aec15\") " pod="openshift-logging/logging-loki-compactor-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.547061 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3605888-b0c3-4049-8f6a-cd4f380b91a7-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: 
\"b3605888-b0c3-4049-8f6a-cd4f380b91a7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.547117 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ed927557-1ad3-416a-885d-33009b830d19\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ed927557-1ad3-416a-885d-33009b830d19\") pod \"logging-loki-compactor-0\" (UID: \"2cb9902a-5fe1-42ee-a659-eebccc3aec15\") " pod="openshift-logging/logging-loki-compactor-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.547164 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvzn4\" (UniqueName: \"kubernetes.io/projected/b3605888-b0c3-4049-8f6a-cd4f380b91a7-kube-api-access-xvzn4\") pod \"logging-loki-ingester-0\" (UID: \"b3605888-b0c3-4049-8f6a-cd4f380b91a7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.547188 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xgtx\" (UniqueName: \"kubernetes.io/projected/05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e-kube-api-access-9xgtx\") pod \"logging-loki-index-gateway-0\" (UID: \"05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.547215 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2e8486e0-2725-4562-874a-a8a36f02e65a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e8486e0-2725-4562-874a-a8a36f02e65a\") pod \"logging-loki-ingester-0\" (UID: \"b3605888-b0c3-4049-8f6a-cd4f380b91a7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.550243 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.550287 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2e8486e0-2725-4562-874a-a8a36f02e65a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e8486e0-2725-4562-874a-a8a36f02e65a\") pod \"logging-loki-ingester-0\" (UID: \"b3605888-b0c3-4049-8f6a-cd4f380b91a7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f21ff46e0cccf1394ebfcd6d08b5381237263e0acb5f3499005b577002275034/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.550520 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3605888-b0c3-4049-8f6a-cd4f380b91a7-config\") pod \"logging-loki-ingester-0\" (UID: \"b3605888-b0c3-4049-8f6a-cd4f380b91a7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.550929 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/b3605888-b0c3-4049-8f6a-cd4f380b91a7-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"b3605888-b0c3-4049-8f6a-cd4f380b91a7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.551009 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3605888-b0c3-4049-8f6a-cd4f380b91a7-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"b3605888-b0c3-4049-8f6a-cd4f380b91a7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.552757 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/b3605888-b0c3-4049-8f6a-cd4f380b91a7-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"b3605888-b0c3-4049-8f6a-cd4f380b91a7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.555753 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b3605888-b0c3-4049-8f6a-cd4f380b91a7-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"b3605888-b0c3-4049-8f6a-cd4f380b91a7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.567355 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvzn4\" (UniqueName: \"kubernetes.io/projected/b3605888-b0c3-4049-8f6a-cd4f380b91a7-kube-api-access-xvzn4\") pod \"logging-loki-ingester-0\" (UID: \"b3605888-b0c3-4049-8f6a-cd4f380b91a7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.585800 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2e8486e0-2725-4562-874a-a8a36f02e65a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e8486e0-2725-4562-874a-a8a36f02e65a\") pod \"logging-loki-ingester-0\" (UID: \"b3605888-b0c3-4049-8f6a-cd4f380b91a7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.649218 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ed927557-1ad3-416a-885d-33009b830d19\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ed927557-1ad3-416a-885d-33009b830d19\") pod \"logging-loki-compactor-0\" (UID: \"2cb9902a-5fe1-42ee-a659-eebccc3aec15\") " pod="openshift-logging/logging-loki-compactor-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.649331 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xgtx\" (UniqueName: \"kubernetes.io/projected/05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e-kube-api-access-9xgtx\") pod \"logging-loki-index-gateway-0\" (UID: \"05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.649729 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cb9902a-5fe1-42ee-a659-eebccc3aec15-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"2cb9902a-5fe1-42ee-a659-eebccc3aec15\") " pod="openshift-logging/logging-loki-compactor-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.649760 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e-config\") pod \"logging-loki-index-gateway-0\" (UID: \"05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.649783 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/2cb9902a-5fe1-42ee-a659-eebccc3aec15-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"2cb9902a-5fe1-42ee-a659-eebccc3aec15\") " pod="openshift-logging/logging-loki-compactor-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.649851 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.649892 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.649924 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.649948 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cb9902a-5fe1-42ee-a659-eebccc3aec15-config\") pod \"logging-loki-compactor-0\" (UID: \"2cb9902a-5fe1-42ee-a659-eebccc3aec15\") " pod="openshift-logging/logging-loki-compactor-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.649988 4721 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-68lzt\" (UniqueName: \"kubernetes.io/projected/2cb9902a-5fe1-42ee-a659-eebccc3aec15-kube-api-access-68lzt\") pod \"logging-loki-compactor-0\" (UID: \"2cb9902a-5fe1-42ee-a659-eebccc3aec15\") " pod="openshift-logging/logging-loki-compactor-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.651595 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cb9902a-5fe1-42ee-a659-eebccc3aec15-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"2cb9902a-5fe1-42ee-a659-eebccc3aec15\") " pod="openshift-logging/logging-loki-compactor-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.651794 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cb9902a-5fe1-42ee-a659-eebccc3aec15-config\") pod \"logging-loki-compactor-0\" (UID: \"2cb9902a-5fe1-42ee-a659-eebccc3aec15\") " pod="openshift-logging/logging-loki-compactor-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.652099 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.652505 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.652538 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/2cb9902a-5fe1-42ee-a659-eebccc3aec15-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"2cb9902a-5fe1-42ee-a659-eebccc3aec15\") " pod="openshift-logging/logging-loki-compactor-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.652568 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a406acb1-d9b3-46b5-9297-bf84583d192a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a406acb1-d9b3-46b5-9297-bf84583d192a\") pod \"logging-loki-index-gateway-0\" (UID: \"05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.652600 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/2cb9902a-5fe1-42ee-a659-eebccc3aec15-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"2cb9902a-5fe1-42ee-a659-eebccc3aec15\") " pod="openshift-logging/logging-loki-compactor-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.652755 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e-config\") pod \"logging-loki-index-gateway-0\" (UID: \"05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 13:14:45 crc kubenswrapper[4721]: 
I0202 13:14:45.656455 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/2cb9902a-5fe1-42ee-a659-eebccc3aec15-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"2cb9902a-5fe1-42ee-a659-eebccc3aec15\") " pod="openshift-logging/logging-loki-compactor-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.658559 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.658592 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ed927557-1ad3-416a-885d-33009b830d19\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ed927557-1ad3-416a-885d-33009b830d19\") pod \"logging-loki-compactor-0\" (UID: \"2cb9902a-5fe1-42ee-a659-eebccc3aec15\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/575151a99aa8257e6030b1ba8770c9171521934c8cf83d5c7cd7603da5ef5b63/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.664459 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.665631 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/2cb9902a-5fe1-42ee-a659-eebccc3aec15-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"2cb9902a-5fe1-42ee-a659-eebccc3aec15\") " pod="openshift-logging/logging-loki-compactor-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.665758 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.666566 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.667367 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/2cb9902a-5fe1-42ee-a659-eebccc3aec15-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"2cb9902a-5fe1-42ee-a659-eebccc3aec15\") " pod="openshift-logging/logging-loki-compactor-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.668887 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xgtx\" (UniqueName: \"kubernetes.io/projected/05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e-kube-api-access-9xgtx\") pod \"logging-loki-index-gateway-0\" (UID: \"05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e\") " 
pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.670643 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68lzt\" (UniqueName: \"kubernetes.io/projected/2cb9902a-5fe1-42ee-a659-eebccc3aec15-kube-api-access-68lzt\") pod \"logging-loki-compactor-0\" (UID: \"2cb9902a-5fe1-42ee-a659-eebccc3aec15\") " pod="openshift-logging/logging-loki-compactor-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.670806 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.670832 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a406acb1-d9b3-46b5-9297-bf84583d192a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a406acb1-d9b3-46b5-9297-bf84583d192a\") pod \"logging-loki-index-gateway-0\" (UID: \"05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9bd2a3d38f0af1e1aa0b202c6a67c0c676ffcc4415aa72f91ba06ab5d848d79c/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.713716 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ed927557-1ad3-416a-885d-33009b830d19\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ed927557-1ad3-416a-885d-33009b830d19\") pod \"logging-loki-compactor-0\" (UID: \"2cb9902a-5fe1-42ee-a659-eebccc3aec15\") " pod="openshift-logging/logging-loki-compactor-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.782565 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a406acb1-d9b3-46b5-9297-bf84583d192a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a406acb1-d9b3-46b5-9297-bf84583d192a\") pod \"logging-loki-index-gateway-0\" (UID: \"05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.816230 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5f86bf5685-lsthj"] Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.837262 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-mnp7c" event={"ID":"98490098-f31f-4ee3-9f15-ee37b8740035","Type":"ContainerStarted","Data":"fb8e6e345bdefed548652ef504a4b3699207acbcab99af43a7672fc56787ee01"} Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.838808 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" event={"ID":"6bbaf0c4-9bfc-4cf9-b238-4f494e492243","Type":"ContainerStarted","Data":"4711b7eb4a0da7212314f1f32d2fe079233728253146fbb3397169eec06f789c"} Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.844058 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-xs62z" event={"ID":"93d83a8b-3334-43f3-b417-58a7fbd7282c","Type":"ContainerStarted","Data":"9223b60892b923767e4ceb8693db69ab690eda2f495ec0aee3e1d718b91f663d"} Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.868129 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tb6gs" 
event={"ID":"7a392a7d-824d-420d-bf0d-66ca95134ea6","Type":"ContainerStarted","Data":"ca0113198309a389da25790ee60a87a610d64b679357d9ae5ad4f1e3342fca45"} Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.868251 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2j8r5" podUID="2c09ae42-5e47-43fd-bcdb-b843f0a80cf5" containerName="registry-server" containerID="cri-o://489cc6a3cbe57f3f7a1a1c8c11ef02d4ab91eb4ecc18fe6c0a2d47b0318d8936" gracePeriod=2 Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.884230 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5f86bf5685-p62nr"] Feb 02 13:14:45 crc kubenswrapper[4721]: I0202 13:14:45.884481 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:14:45 crc kubenswrapper[4721]: W0202 13:14:45.895565 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0a2094f_7b9c_426c_b7ea_6a175be407f1.slice/crio-91aeae4ccd700b2348caa4d837157cb32f19a714b3ae997f4aac1d12ed759a3a WatchSource:0}: Error finding container 91aeae4ccd700b2348caa4d837157cb32f19a714b3ae997f4aac1d12ed759a3a: Status 404 returned error can't find the container with id 91aeae4ccd700b2348caa4d837157cb32f19a714b3ae997f4aac1d12ed759a3a Feb 02 13:14:46 crc kubenswrapper[4721]: I0202 13:14:46.014677 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 02 13:14:46 crc kubenswrapper[4721]: I0202 13:14:46.038234 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 13:14:46 crc kubenswrapper[4721]: I0202 13:14:46.298457 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 02 13:14:46 crc kubenswrapper[4721]: W0202 13:14:46.311102 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3605888_b0c3_4049_8f6a_cd4f380b91a7.slice/crio-6084a3489789763a95ef4ea2a853ec9afb5812a4c9323b9c401d41a3bc378eb5 WatchSource:0}: Error finding container 6084a3489789763a95ef4ea2a853ec9afb5812a4c9323b9c401d41a3bc378eb5: Status 404 returned error can't find the container with id 6084a3489789763a95ef4ea2a853ec9afb5812a4c9323b9c401d41a3bc378eb5 Feb 02 13:14:46 crc kubenswrapper[4721]: I0202 13:14:46.508587 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 02 13:14:46 crc kubenswrapper[4721]: W0202 13:14:46.513693 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05f7eb9f_7ce9_4d66_b8e1_cc9eb0c1949e.slice/crio-5bf6122aacafc0b6388120a083d8b2971b4a41f1758cdb149ab8f88b8da297bd WatchSource:0}: Error finding container 5bf6122aacafc0b6388120a083d8b2971b4a41f1758cdb149ab8f88b8da297bd: Status 404 returned error can't find the container with id 5bf6122aacafc0b6388120a083d8b2971b4a41f1758cdb149ab8f88b8da297bd Feb 02 13:14:46 crc kubenswrapper[4721]: I0202 13:14:46.514236 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 02 13:14:46 crc kubenswrapper[4721]: I0202 13:14:46.875564 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-logging/logging-loki-ingester-0" event={"ID":"b3605888-b0c3-4049-8f6a-cd4f380b91a7","Type":"ContainerStarted","Data":"6084a3489789763a95ef4ea2a853ec9afb5812a4c9323b9c401d41a3bc378eb5"} Feb 02 13:14:46 crc kubenswrapper[4721]: I0202 13:14:46.876899 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" event={"ID":"e0a2094f-7b9c-426c-b7ea-6a175be407f1","Type":"ContainerStarted","Data":"91aeae4ccd700b2348caa4d837157cb32f19a714b3ae997f4aac1d12ed759a3a"} Feb 02 13:14:46 crc kubenswrapper[4721]: I0202 13:14:46.878343 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e","Type":"ContainerStarted","Data":"5bf6122aacafc0b6388120a083d8b2971b4a41f1758cdb149ab8f88b8da297bd"} Feb 02 13:14:46 crc kubenswrapper[4721]: I0202 13:14:46.882001 4721 generic.go:334] "Generic (PLEG): container finished" podID="2c09ae42-5e47-43fd-bcdb-b843f0a80cf5" containerID="489cc6a3cbe57f3f7a1a1c8c11ef02d4ab91eb4ecc18fe6c0a2d47b0318d8936" exitCode=0 Feb 02 13:14:46 crc kubenswrapper[4721]: I0202 13:14:46.882052 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2j8r5" event={"ID":"2c09ae42-5e47-43fd-bcdb-b843f0a80cf5","Type":"ContainerDied","Data":"489cc6a3cbe57f3f7a1a1c8c11ef02d4ab91eb4ecc18fe6c0a2d47b0318d8936"} Feb 02 13:14:46 crc kubenswrapper[4721]: I0202 13:14:46.882970 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"2cb9902a-5fe1-42ee-a659-eebccc3aec15","Type":"ContainerStarted","Data":"b05028a727fbcccbebb99f52d6adbbba29cde94cac69a952fa8f40449a35b940"} Feb 02 13:14:46 crc kubenswrapper[4721]: I0202 13:14:46.992106 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2j8r5" Feb 02 13:14:47 crc kubenswrapper[4721]: I0202 13:14:47.187996 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c09ae42-5e47-43fd-bcdb-b843f0a80cf5-catalog-content\") pod \"2c09ae42-5e47-43fd-bcdb-b843f0a80cf5\" (UID: \"2c09ae42-5e47-43fd-bcdb-b843f0a80cf5\") " Feb 02 13:14:47 crc kubenswrapper[4721]: I0202 13:14:47.188111 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c09ae42-5e47-43fd-bcdb-b843f0a80cf5-utilities\") pod \"2c09ae42-5e47-43fd-bcdb-b843f0a80cf5\" (UID: \"2c09ae42-5e47-43fd-bcdb-b843f0a80cf5\") " Feb 02 13:14:47 crc kubenswrapper[4721]: I0202 13:14:47.188175 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4p27h\" (UniqueName: \"kubernetes.io/projected/2c09ae42-5e47-43fd-bcdb-b843f0a80cf5-kube-api-access-4p27h\") pod \"2c09ae42-5e47-43fd-bcdb-b843f0a80cf5\" (UID: \"2c09ae42-5e47-43fd-bcdb-b843f0a80cf5\") " Feb 02 13:14:47 crc kubenswrapper[4721]: I0202 13:14:47.189432 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c09ae42-5e47-43fd-bcdb-b843f0a80cf5-utilities" (OuterVolumeSpecName: "utilities") pod "2c09ae42-5e47-43fd-bcdb-b843f0a80cf5" (UID: "2c09ae42-5e47-43fd-bcdb-b843f0a80cf5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:14:47 crc kubenswrapper[4721]: I0202 13:14:47.205046 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c09ae42-5e47-43fd-bcdb-b843f0a80cf5-kube-api-access-4p27h" (OuterVolumeSpecName: "kube-api-access-4p27h") pod "2c09ae42-5e47-43fd-bcdb-b843f0a80cf5" (UID: "2c09ae42-5e47-43fd-bcdb-b843f0a80cf5"). InnerVolumeSpecName "kube-api-access-4p27h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:14:47 crc kubenswrapper[4721]: I0202 13:14:47.217966 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c09ae42-5e47-43fd-bcdb-b843f0a80cf5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c09ae42-5e47-43fd-bcdb-b843f0a80cf5" (UID: "2c09ae42-5e47-43fd-bcdb-b843f0a80cf5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:14:47 crc kubenswrapper[4721]: I0202 13:14:47.290092 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c09ae42-5e47-43fd-bcdb-b843f0a80cf5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:14:47 crc kubenswrapper[4721]: I0202 13:14:47.290126 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c09ae42-5e47-43fd-bcdb-b843f0a80cf5-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:14:47 crc kubenswrapper[4721]: I0202 13:14:47.290136 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4p27h\" (UniqueName: \"kubernetes.io/projected/2c09ae42-5e47-43fd-bcdb-b843f0a80cf5-kube-api-access-4p27h\") on node \"crc\" DevicePath \"\"" Feb 02 13:14:47 crc kubenswrapper[4721]: I0202 13:14:47.894156 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2j8r5" event={"ID":"2c09ae42-5e47-43fd-bcdb-b843f0a80cf5","Type":"ContainerDied","Data":"c4a7acf51223f45f628cd91f41481e7aede65390c1e3de98544e03d9db406fc1"} Feb 02 13:14:47 crc kubenswrapper[4721]: I0202 13:14:47.894215 4721 scope.go:117] "RemoveContainer" containerID="489cc6a3cbe57f3f7a1a1c8c11ef02d4ab91eb4ecc18fe6c0a2d47b0318d8936" Feb 02 13:14:47 crc kubenswrapper[4721]: I0202 13:14:47.894352 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2j8r5" Feb 02 13:14:47 crc kubenswrapper[4721]: I0202 13:14:47.927866 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2j8r5"] Feb 02 13:14:47 crc kubenswrapper[4721]: I0202 13:14:47.937350 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2j8r5"] Feb 02 13:14:48 crc kubenswrapper[4721]: I0202 13:14:48.418995 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c09ae42-5e47-43fd-bcdb-b843f0a80cf5" path="/var/lib/kubelet/pods/2c09ae42-5e47-43fd-bcdb-b843f0a80cf5/volumes" Feb 02 13:14:48 crc kubenswrapper[4721]: I0202 13:14:48.610457 4721 scope.go:117] "RemoveContainer" containerID="e28c5c9633828095d9e468bec9d5dca15024d1f9adc02fe17ec54b56e17309d6" Feb 02 13:14:49 crc kubenswrapper[4721]: I0202 13:14:49.517448 4721 scope.go:117] "RemoveContainer" containerID="257023ea900b686319d684f93bf86174efc5e0727f68b5296e819f6673afb9f7" Feb 02 13:14:49 crc kubenswrapper[4721]: I0202 13:14:49.915698 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-mnp7c" event={"ID":"98490098-f31f-4ee3-9f15-ee37b8740035","Type":"ContainerStarted","Data":"4086d3da4ad2c2d21614ee02f6784e740d57e601d320d13ca9a8bea09c5f5a57"} Feb 02 13:14:49 crc kubenswrapper[4721]: I0202 13:14:49.916314 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76788598db-mnp7c" Feb 02 13:14:49 crc kubenswrapper[4721]: I0202 13:14:49.920617 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" event={"ID":"6bbaf0c4-9bfc-4cf9-b238-4f494e492243","Type":"ContainerStarted","Data":"cea591203469d67d8ac27cf0aea109b32f50ca2717610bec7ec2894317741a5a"} Feb 02 13:14:49 crc kubenswrapper[4721]: I0202 13:14:49.935726 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76788598db-mnp7c" podStartSLOduration=1.403821258 podStartE2EDuration="5.935710739s" podCreationTimestamp="2026-02-02 13:14:44 +0000 UTC" firstStartedPulling="2026-02-02 13:14:45.086236746 +0000 UTC m=+825.388751135" lastFinishedPulling="2026-02-02 13:14:49.618126227 +0000 UTC m=+829.920640616" observedRunningTime="2026-02-02 13:14:49.932663418 +0000 UTC m=+830.235177827" watchObservedRunningTime="2026-02-02 13:14:49.935710739 +0000 UTC m=+830.238225128" Feb 02 13:14:49 crc kubenswrapper[4721]: I0202 13:14:49.948162 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"b3605888-b0c3-4049-8f6a-cd4f380b91a7","Type":"ContainerStarted","Data":"3ea5bbcf7382aaf12d7111cd0592cf9fa3971e14a3ad042db238a1b506c5d58b"} Feb 02 13:14:49 crc kubenswrapper[4721]: I0202 13:14:49.948248 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:14:49 crc kubenswrapper[4721]: I0202 13:14:49.957954 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-xs62z" event={"ID":"93d83a8b-3334-43f3-b417-58a7fbd7282c","Type":"ContainerStarted","Data":"900c298808bd159f79b6e9efb1aab9db8bb0cbfd9bf23be67e6fefdae5083d1e"} Feb 02 13:14:49 crc kubenswrapper[4721]: I0202 13:14:49.958690 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-logging/logging-loki-query-frontend-69d9546745-xs62z" Feb 02 13:14:49 crc kubenswrapper[4721]: I0202 13:14:49.974056 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=2.67938469 podStartE2EDuration="5.97403792s" podCreationTimestamp="2026-02-02 13:14:44 +0000 UTC" firstStartedPulling="2026-02-02 13:14:46.313597776 +0000 UTC m=+826.616112165" lastFinishedPulling="2026-02-02 13:14:49.608251006 +0000 UTC m=+829.910765395" observedRunningTime="2026-02-02 13:14:49.970030735 +0000 UTC m=+830.272545124" watchObservedRunningTime="2026-02-02 13:14:49.97403792 +0000 UTC m=+830.276552309" Feb 02 13:14:50 crc kubenswrapper[4721]: I0202 13:14:50.000580 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-69d9546745-xs62z" podStartSLOduration=1.64307143 podStartE2EDuration="6.00056389s" podCreationTimestamp="2026-02-02 13:14:44 +0000 UTC" firstStartedPulling="2026-02-02 13:14:45.213890124 +0000 UTC m=+825.516404513" lastFinishedPulling="2026-02-02 13:14:49.571382584 +0000 UTC m=+829.873896973" observedRunningTime="2026-02-02 13:14:49.999179543 +0000 UTC m=+830.301693932" watchObservedRunningTime="2026-02-02 13:14:50.00056389 +0000 UTC m=+830.303078279" Feb 02 13:14:50 crc kubenswrapper[4721]: I0202 13:14:50.982628 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" event={"ID":"e0a2094f-7b9c-426c-b7ea-6a175be407f1","Type":"ContainerStarted","Data":"52c56b9b4aad7d9613498e92d74085c1dd666eecd51d58b1ef0594a883c327fd"} Feb 02 13:14:50 crc kubenswrapper[4721]: I0202 13:14:50.984563 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e","Type":"ContainerStarted","Data":"06d760a66be361fb8a9a24ac09b04fea7ab602f9b48628062068683420d1b72d"} Feb 02 13:14:50 crc kubenswrapper[4721]: I0202 13:14:50.984647 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 13:14:50 crc kubenswrapper[4721]: I0202 13:14:50.985785 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tb6gs" event={"ID":"7a392a7d-824d-420d-bf0d-66ca95134ea6","Type":"ContainerStarted","Data":"6b1866add1234c812d9728a8f46d75b7307fee4f54c3173bc8d783d1d450e86e"} Feb 02 13:14:50 crc kubenswrapper[4721]: I0202 13:14:50.985886 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tb6gs" Feb 02 13:14:50 crc kubenswrapper[4721]: I0202 13:14:50.987786 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"2cb9902a-5fe1-42ee-a659-eebccc3aec15","Type":"ContainerStarted","Data":"ebdb48cc8ece068cb66b383eb389088a29273b254af0e3c7749c23b159db4cce"} Feb 02 13:14:51 crc kubenswrapper[4721]: I0202 13:14:51.023049 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.923040222 podStartE2EDuration="7.023027805s" podCreationTimestamp="2026-02-02 13:14:44 +0000 UTC" firstStartedPulling="2026-02-02 13:14:46.518944826 +0000 UTC m=+826.821459215" lastFinishedPulling="2026-02-02 13:14:49.618932409 +0000 UTC m=+829.921446798" observedRunningTime="2026-02-02 13:14:51.003263273 +0000 UTC 
m=+831.305777662" watchObservedRunningTime="2026-02-02 13:14:51.023027805 +0000 UTC m=+831.325542204" Feb 02 13:14:51 crc kubenswrapper[4721]: I0202 13:14:51.023196 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.885224034 podStartE2EDuration="7.023190179s" podCreationTimestamp="2026-02-02 13:14:44 +0000 UTC" firstStartedPulling="2026-02-02 13:14:46.51228849 +0000 UTC m=+826.814802879" lastFinishedPulling="2026-02-02 13:14:49.650254625 +0000 UTC m=+829.952769024" observedRunningTime="2026-02-02 13:14:51.02060617 +0000 UTC m=+831.323120569" watchObservedRunningTime="2026-02-02 13:14:51.023190179 +0000 UTC m=+831.325704588" Feb 02 13:14:51 crc kubenswrapper[4721]: I0202 13:14:51.039362 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tb6gs" podStartSLOduration=2.371687269 podStartE2EDuration="7.039344405s" podCreationTimestamp="2026-02-02 13:14:44 +0000 UTC" firstStartedPulling="2026-02-02 13:14:44.98601068 +0000 UTC m=+825.288525069" lastFinishedPulling="2026-02-02 13:14:49.653667816 +0000 UTC m=+829.956182205" observedRunningTime="2026-02-02 13:14:51.037979679 +0000 UTC m=+831.340494088" watchObservedRunningTime="2026-02-02 13:14:51.039344405 +0000 UTC m=+831.341858794" Feb 02 13:14:51 crc kubenswrapper[4721]: I0202 13:14:51.993741 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Feb 02 13:14:55 crc kubenswrapper[4721]: I0202 13:14:55.019013 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" event={"ID":"6bbaf0c4-9bfc-4cf9-b238-4f494e492243","Type":"ContainerStarted","Data":"69ac97b235683e1115e495b613194b431973148a4f0ab3c4a1f9129c8c0097aa"} Feb 02 13:14:55 crc kubenswrapper[4721]: I0202 13:14:55.019598 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:55 crc kubenswrapper[4721]: I0202 13:14:55.019774 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:55 crc kubenswrapper[4721]: I0202 13:14:55.022394 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" event={"ID":"e0a2094f-7b9c-426c-b7ea-6a175be407f1","Type":"ContainerStarted","Data":"f3be0acc575a5a61840ab054e14736571b1a91aa8208b670d6c569c7f42d8749"} Feb 02 13:14:55 crc kubenswrapper[4721]: I0202 13:14:55.023023 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:55 crc kubenswrapper[4721]: I0202 13:14:55.023278 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:55 crc kubenswrapper[4721]: I0202 13:14:55.030710 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:55 crc kubenswrapper[4721]: I0202 13:14:55.033753 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" Feb 02 13:14:55 crc kubenswrapper[4721]: I0202 13:14:55.034202 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:55 crc kubenswrapper[4721]: I0202 13:14:55.035697 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" Feb 02 13:14:55 crc kubenswrapper[4721]: I0202 13:14:55.071242 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-5f86bf5685-lsthj" podStartSLOduration=2.681433814 podStartE2EDuration="11.071210401s" podCreationTimestamp="2026-02-02 13:14:44 +0000 UTC" firstStartedPulling="2026-02-02 13:14:45.835618842 +0000 UTC m=+826.138133231" lastFinishedPulling="2026-02-02 13:14:54.225395429 +0000 UTC m=+834.527909818" observedRunningTime="2026-02-02 13:14:55.052139497 +0000 UTC m=+835.354653906" watchObservedRunningTime="2026-02-02 13:14:55.071210401 +0000 UTC m=+835.373724840" Feb 02 13:14:55 crc kubenswrapper[4721]: I0202 13:14:55.125090 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-5f86bf5685-p62nr" podStartSLOduration=2.794109037 podStartE2EDuration="11.125045301s" podCreationTimestamp="2026-02-02 13:14:44 +0000 UTC" firstStartedPulling="2026-02-02 13:14:45.898283426 +0000 UTC m=+826.200797815" lastFinishedPulling="2026-02-02 13:14:54.22921969 +0000 UTC m=+834.531734079" observedRunningTime="2026-02-02 13:14:55.117550994 +0000 UTC m=+835.420065403" watchObservedRunningTime="2026-02-02 13:14:55.125045301 +0000 UTC m=+835.427559720" Feb 02 13:15:00 crc kubenswrapper[4721]: I0202 13:15:00.204960 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500635-qjbdn"] Feb 02 13:15:00 crc kubenswrapper[4721]: E0202 13:15:00.205660 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c09ae42-5e47-43fd-bcdb-b843f0a80cf5" containerName="extract-content" Feb 02 13:15:00 crc kubenswrapper[4721]: I0202 13:15:00.205679 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c09ae42-5e47-43fd-bcdb-b843f0a80cf5" containerName="extract-content" Feb 02 13:15:00 crc kubenswrapper[4721]: E0202 13:15:00.205709 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c09ae42-5e47-43fd-bcdb-b843f0a80cf5" containerName="extract-utilities" Feb 02 13:15:00 crc kubenswrapper[4721]: I0202 13:15:00.205717 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c09ae42-5e47-43fd-bcdb-b843f0a80cf5" containerName="extract-utilities" Feb 02 13:15:00 crc kubenswrapper[4721]: E0202 13:15:00.205738 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c09ae42-5e47-43fd-bcdb-b843f0a80cf5" containerName="registry-server" Feb 02 13:15:00 crc kubenswrapper[4721]: I0202 13:15:00.205745 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c09ae42-5e47-43fd-bcdb-b843f0a80cf5" containerName="registry-server" Feb 02 13:15:00 crc kubenswrapper[4721]: I0202 13:15:00.205986 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c09ae42-5e47-43fd-bcdb-b843f0a80cf5" containerName="registry-server" Feb 02 13:15:00 crc kubenswrapper[4721]: I0202 13:15:00.206583 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-qjbdn" Feb 02 13:15:00 crc kubenswrapper[4721]: I0202 13:15:00.210562 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 13:15:00 crc kubenswrapper[4721]: I0202 13:15:00.212582 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 13:15:00 crc kubenswrapper[4721]: I0202 13:15:00.222079 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500635-qjbdn"] Feb 02 13:15:00 crc kubenswrapper[4721]: I0202 13:15:00.289520 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/13c984cb-b059-4e3f-86f2-8abca8e6942e-secret-volume\") pod \"collect-profiles-29500635-qjbdn\" (UID: \"13c984cb-b059-4e3f-86f2-8abca8e6942e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-qjbdn" Feb 02 13:15:00 crc kubenswrapper[4721]: I0202 13:15:00.289633 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13c984cb-b059-4e3f-86f2-8abca8e6942e-config-volume\") pod \"collect-profiles-29500635-qjbdn\" (UID: \"13c984cb-b059-4e3f-86f2-8abca8e6942e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-qjbdn" Feb 02 13:15:00 crc kubenswrapper[4721]: I0202 13:15:00.289684 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x2fq\" (UniqueName: \"kubernetes.io/projected/13c984cb-b059-4e3f-86f2-8abca8e6942e-kube-api-access-4x2fq\") pod \"collect-profiles-29500635-qjbdn\" (UID: \"13c984cb-b059-4e3f-86f2-8abca8e6942e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-qjbdn" Feb 02 13:15:00 crc kubenswrapper[4721]: I0202 13:15:00.390491 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13c984cb-b059-4e3f-86f2-8abca8e6942e-config-volume\") pod \"collect-profiles-29500635-qjbdn\" (UID: \"13c984cb-b059-4e3f-86f2-8abca8e6942e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-qjbdn" Feb 02 13:15:00 crc kubenswrapper[4721]: I0202 13:15:00.390548 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x2fq\" (UniqueName: \"kubernetes.io/projected/13c984cb-b059-4e3f-86f2-8abca8e6942e-kube-api-access-4x2fq\") pod \"collect-profiles-29500635-qjbdn\" (UID: \"13c984cb-b059-4e3f-86f2-8abca8e6942e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-qjbdn" Feb 02 13:15:00 crc kubenswrapper[4721]: I0202 13:15:00.390637 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/13c984cb-b059-4e3f-86f2-8abca8e6942e-secret-volume\") pod \"collect-profiles-29500635-qjbdn\" (UID: \"13c984cb-b059-4e3f-86f2-8abca8e6942e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-qjbdn" Feb 02 13:15:00 crc kubenswrapper[4721]: I0202 13:15:00.392536 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 13:15:00 crc kubenswrapper[4721]: I0202 
Feb 02 13:15:00 crc kubenswrapper[4721]: I0202 13:15:00.401740 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13c984cb-b059-4e3f-86f2-8abca8e6942e-config-volume\") pod \"collect-profiles-29500635-qjbdn\" (UID: \"13c984cb-b059-4e3f-86f2-8abca8e6942e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-qjbdn"
Feb 02 13:15:00 crc kubenswrapper[4721]: I0202 13:15:00.413045 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x2fq\" (UniqueName: \"kubernetes.io/projected/13c984cb-b059-4e3f-86f2-8abca8e6942e-kube-api-access-4x2fq\") pod \"collect-profiles-29500635-qjbdn\" (UID: \"13c984cb-b059-4e3f-86f2-8abca8e6942e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-qjbdn"
Feb 02 13:15:00 crc kubenswrapper[4721]: I0202 13:15:00.525810 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 02 13:15:00 crc kubenswrapper[4721]: I0202 13:15:00.535123 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-qjbdn"
Feb 02 13:15:00 crc kubenswrapper[4721]: I0202 13:15:00.751862 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500635-qjbdn"]
Feb 02 13:15:01 crc kubenswrapper[4721]: I0202 13:15:01.065496 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-qjbdn" event={"ID":"13c984cb-b059-4e3f-86f2-8abca8e6942e","Type":"ContainerStarted","Data":"dbd339a45a88197a4052721f67969cee0f84e4e520076f5a19a1a3e14ab9298f"}
Feb 02 13:15:01 crc kubenswrapper[4721]: I0202 13:15:01.065540 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-qjbdn" event={"ID":"13c984cb-b059-4e3f-86f2-8abca8e6942e","Type":"ContainerStarted","Data":"1e4acb7e64b443ef71b0b41d513ba87644d053deb453fd5009c15092f6438056"}
Feb 02 13:15:01 crc kubenswrapper[4721]: I0202 13:15:01.080547 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-qjbdn" podStartSLOduration=1.080531614 podStartE2EDuration="1.080531614s" podCreationTimestamp="2026-02-02 13:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:15:01.077224796 +0000 UTC m=+841.379739185" watchObservedRunningTime="2026-02-02 13:15:01.080531614 +0000 UTC m=+841.383046003"
Feb 02 13:15:02 crc kubenswrapper[4721]: I0202 13:15:02.074176 4721 generic.go:334] "Generic (PLEG): container finished" podID="13c984cb-b059-4e3f-86f2-8abca8e6942e" containerID="dbd339a45a88197a4052721f67969cee0f84e4e520076f5a19a1a3e14ab9298f" exitCode=0
Feb 02 13:15:02 crc kubenswrapper[4721]: I0202 13:15:02.074294 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-qjbdn" event={"ID":"13c984cb-b059-4e3f-86f2-8abca8e6942e","Type":"ContainerDied","Data":"dbd339a45a88197a4052721f67969cee0f84e4e520076f5a19a1a3e14ab9298f"}
Feb 02 13:15:03 crc kubenswrapper[4721]: I0202 13:15:03.333669 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-qjbdn"
Feb 02 13:15:03 crc kubenswrapper[4721]: I0202 13:15:03.434818 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13c984cb-b059-4e3f-86f2-8abca8e6942e-config-volume\") pod \"13c984cb-b059-4e3f-86f2-8abca8e6942e\" (UID: \"13c984cb-b059-4e3f-86f2-8abca8e6942e\") "
Feb 02 13:15:03 crc kubenswrapper[4721]: I0202 13:15:03.434968 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/13c984cb-b059-4e3f-86f2-8abca8e6942e-secret-volume\") pod \"13c984cb-b059-4e3f-86f2-8abca8e6942e\" (UID: \"13c984cb-b059-4e3f-86f2-8abca8e6942e\") "
Feb 02 13:15:03 crc kubenswrapper[4721]: I0202 13:15:03.435038 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x2fq\" (UniqueName: \"kubernetes.io/projected/13c984cb-b059-4e3f-86f2-8abca8e6942e-kube-api-access-4x2fq\") pod \"13c984cb-b059-4e3f-86f2-8abca8e6942e\" (UID: \"13c984cb-b059-4e3f-86f2-8abca8e6942e\") "
Feb 02 13:15:03 crc kubenswrapper[4721]: I0202 13:15:03.436008 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13c984cb-b059-4e3f-86f2-8abca8e6942e-config-volume" (OuterVolumeSpecName: "config-volume") pod "13c984cb-b059-4e3f-86f2-8abca8e6942e" (UID: "13c984cb-b059-4e3f-86f2-8abca8e6942e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:15:03 crc kubenswrapper[4721]: I0202 13:15:03.440054 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13c984cb-b059-4e3f-86f2-8abca8e6942e-kube-api-access-4x2fq" (OuterVolumeSpecName: "kube-api-access-4x2fq") pod "13c984cb-b059-4e3f-86f2-8abca8e6942e" (UID: "13c984cb-b059-4e3f-86f2-8abca8e6942e"). InnerVolumeSpecName "kube-api-access-4x2fq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:15:03 crc kubenswrapper[4721]: I0202 13:15:03.440734 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13c984cb-b059-4e3f-86f2-8abca8e6942e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "13c984cb-b059-4e3f-86f2-8abca8e6942e" (UID: "13c984cb-b059-4e3f-86f2-8abca8e6942e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:15:03 crc kubenswrapper[4721]: I0202 13:15:03.536703 4721 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/13c984cb-b059-4e3f-86f2-8abca8e6942e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 13:15:03 crc kubenswrapper[4721]: I0202 13:15:03.536751 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x2fq\" (UniqueName: \"kubernetes.io/projected/13c984cb-b059-4e3f-86f2-8abca8e6942e-kube-api-access-4x2fq\") on node \"crc\" DevicePath \"\"" Feb 02 13:15:03 crc kubenswrapper[4721]: I0202 13:15:03.536765 4721 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13c984cb-b059-4e3f-86f2-8abca8e6942e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 13:15:04 crc kubenswrapper[4721]: I0202 13:15:04.089679 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-qjbdn" event={"ID":"13c984cb-b059-4e3f-86f2-8abca8e6942e","Type":"ContainerDied","Data":"1e4acb7e64b443ef71b0b41d513ba87644d053deb453fd5009c15092f6438056"} Feb 02 13:15:04 crc kubenswrapper[4721]: I0202 13:15:04.089735 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e4acb7e64b443ef71b0b41d513ba87644d053deb453fd5009c15092f6438056" Feb 02 13:15:04 crc kubenswrapper[4721]: I0202 13:15:04.089743 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500635-qjbdn" Feb 02 13:15:04 crc kubenswrapper[4721]: I0202 13:15:04.452102 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-tb6gs" Feb 02 13:15:04 crc kubenswrapper[4721]: I0202 13:15:04.609349 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76788598db-mnp7c" Feb 02 13:15:04 crc kubenswrapper[4721]: I0202 13:15:04.798153 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-69d9546745-xs62z" Feb 02 13:15:05 crc kubenswrapper[4721]: I0202 13:15:05.890316 4721 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 02 13:15:05 crc kubenswrapper[4721]: I0202 13:15:05.890658 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="b3605888-b0c3-4049-8f6a-cd4f380b91a7" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 02 13:15:06 crc kubenswrapper[4721]: I0202 13:15:06.022578 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Feb 02 13:15:06 crc kubenswrapper[4721]: I0202 13:15:06.050921 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Feb 02 13:15:14 crc kubenswrapper[4721]: I0202 13:15:14.763637 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:15:14 crc kubenswrapper[4721]: I0202 13:15:14.764275 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:15:14 crc kubenswrapper[4721]: I0202 13:15:14.764345 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" Feb 02 13:15:14 crc kubenswrapper[4721]: I0202 13:15:14.765358 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4e271bf7e19d8205d47335a427c173d1e8d60e0f2a6167b224679306973cc1cc"} pod="openshift-machine-config-operator/machine-config-daemon-rppjz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 13:15:14 crc kubenswrapper[4721]: I0202 13:15:14.765505 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" containerID="cri-o://4e271bf7e19d8205d47335a427c173d1e8d60e0f2a6167b224679306973cc1cc" gracePeriod=600 Feb 02 13:15:15 crc kubenswrapper[4721]: I0202 13:15:15.188642 4721 generic.go:334] "Generic (PLEG): container finished" podID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerID="4e271bf7e19d8205d47335a427c173d1e8d60e0f2a6167b224679306973cc1cc" exitCode=0 Feb 02 13:15:15 crc kubenswrapper[4721]: I0202 13:15:15.188715 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerDied","Data":"4e271bf7e19d8205d47335a427c173d1e8d60e0f2a6167b224679306973cc1cc"} Feb 02 13:15:15 crc kubenswrapper[4721]: I0202 13:15:15.188987 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerStarted","Data":"3e250ca33160c82bc83b5c1d01cc482ebd55cdb3c1b9ae291d6af786cb617e66"} Feb 02 13:15:15 crc kubenswrapper[4721]: I0202 13:15:15.189017 4721 scope.go:117] "RemoveContainer" containerID="014ac4f70cadb2e5ed3977a4b883172b9e9190b5cdf25295500702abdd38ede7" Feb 02 13:15:15 crc kubenswrapper[4721]: I0202 13:15:15.892274 4721 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 02 13:15:15 crc kubenswrapper[4721]: I0202 13:15:15.892635 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="b3605888-b0c3-4049-8f6a-cd4f380b91a7" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 02 13:15:25 crc kubenswrapper[4721]: I0202 13:15:25.889773 4721 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after 
Feb 02 13:15:25 crc kubenswrapper[4721]: I0202 13:15:25.890656 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="b3605888-b0c3-4049-8f6a-cd4f380b91a7" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 02 13:15:31 crc kubenswrapper[4721]: I0202 13:15:31.865125 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tgnrr"]
Feb 02 13:15:31 crc kubenswrapper[4721]: E0202 13:15:31.866170 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13c984cb-b059-4e3f-86f2-8abca8e6942e" containerName="collect-profiles"
Feb 02 13:15:31 crc kubenswrapper[4721]: I0202 13:15:31.866192 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="13c984cb-b059-4e3f-86f2-8abca8e6942e" containerName="collect-profiles"
Feb 02 13:15:31 crc kubenswrapper[4721]: I0202 13:15:31.866431 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="13c984cb-b059-4e3f-86f2-8abca8e6942e" containerName="collect-profiles"
Feb 02 13:15:31 crc kubenswrapper[4721]: I0202 13:15:31.868221 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tgnrr"
Feb 02 13:15:31 crc kubenswrapper[4721]: I0202 13:15:31.876587 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tgnrr"]
Feb 02 13:15:31 crc kubenswrapper[4721]: I0202 13:15:31.984915 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlcrv\" (UniqueName: \"kubernetes.io/projected/ff82afec-f54e-4b47-8399-fd27b44558d3-kube-api-access-rlcrv\") pod \"certified-operators-tgnrr\" (UID: \"ff82afec-f54e-4b47-8399-fd27b44558d3\") " pod="openshift-marketplace/certified-operators-tgnrr"
Feb 02 13:15:31 crc kubenswrapper[4721]: I0202 13:15:31.985257 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff82afec-f54e-4b47-8399-fd27b44558d3-catalog-content\") pod \"certified-operators-tgnrr\" (UID: \"ff82afec-f54e-4b47-8399-fd27b44558d3\") " pod="openshift-marketplace/certified-operators-tgnrr"
Feb 02 13:15:31 crc kubenswrapper[4721]: I0202 13:15:31.985329 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff82afec-f54e-4b47-8399-fd27b44558d3-utilities\") pod \"certified-operators-tgnrr\" (UID: \"ff82afec-f54e-4b47-8399-fd27b44558d3\") " pod="openshift-marketplace/certified-operators-tgnrr"
Feb 02 13:15:32 crc kubenswrapper[4721]: I0202 13:15:32.086656 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff82afec-f54e-4b47-8399-fd27b44558d3-catalog-content\") pod \"certified-operators-tgnrr\" (UID: \"ff82afec-f54e-4b47-8399-fd27b44558d3\") " pod="openshift-marketplace/certified-operators-tgnrr"
Feb 02 13:15:32 crc kubenswrapper[4721]: I0202 13:15:32.086944 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff82afec-f54e-4b47-8399-fd27b44558d3-utilities\") pod \"certified-operators-tgnrr\" (UID: \"ff82afec-f54e-4b47-8399-fd27b44558d3\") " pod="openshift-marketplace/certified-operators-tgnrr"
Feb 02 13:15:32 crc kubenswrapper[4721]: I0202 13:15:32.087107 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlcrv\" (UniqueName: \"kubernetes.io/projected/ff82afec-f54e-4b47-8399-fd27b44558d3-kube-api-access-rlcrv\") pod \"certified-operators-tgnrr\" (UID: \"ff82afec-f54e-4b47-8399-fd27b44558d3\") " pod="openshift-marketplace/certified-operators-tgnrr"
Feb 02 13:15:32 crc kubenswrapper[4721]: I0202 13:15:32.087241 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff82afec-f54e-4b47-8399-fd27b44558d3-catalog-content\") pod \"certified-operators-tgnrr\" (UID: \"ff82afec-f54e-4b47-8399-fd27b44558d3\") " pod="openshift-marketplace/certified-operators-tgnrr"
Feb 02 13:15:32 crc kubenswrapper[4721]: I0202 13:15:32.087392 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff82afec-f54e-4b47-8399-fd27b44558d3-utilities\") pod \"certified-operators-tgnrr\" (UID: \"ff82afec-f54e-4b47-8399-fd27b44558d3\") " pod="openshift-marketplace/certified-operators-tgnrr"
Feb 02 13:15:32 crc kubenswrapper[4721]: I0202 13:15:32.112014 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlcrv\" (UniqueName: \"kubernetes.io/projected/ff82afec-f54e-4b47-8399-fd27b44558d3-kube-api-access-rlcrv\") pod \"certified-operators-tgnrr\" (UID: \"ff82afec-f54e-4b47-8399-fd27b44558d3\") " pod="openshift-marketplace/certified-operators-tgnrr"
Feb 02 13:15:32 crc kubenswrapper[4721]: I0202 13:15:32.187499 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tgnrr"
Feb 02 13:15:32 crc kubenswrapper[4721]: I0202 13:15:32.699746 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tgnrr"]
Feb 02 13:15:33 crc kubenswrapper[4721]: I0202 13:15:33.302838 4721 generic.go:334] "Generic (PLEG): container finished" podID="ff82afec-f54e-4b47-8399-fd27b44558d3" containerID="e919ad3046080582c210b132a34bd292a9604a038e68ce187c8ada2cb5b4e644" exitCode=0
Feb 02 13:15:33 crc kubenswrapper[4721]: I0202 13:15:33.302901 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgnrr" event={"ID":"ff82afec-f54e-4b47-8399-fd27b44558d3","Type":"ContainerDied","Data":"e919ad3046080582c210b132a34bd292a9604a038e68ce187c8ada2cb5b4e644"}
Feb 02 13:15:33 crc kubenswrapper[4721]: I0202 13:15:33.302981 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgnrr" event={"ID":"ff82afec-f54e-4b47-8399-fd27b44558d3","Type":"ContainerStarted","Data":"a199694561ebeb611256354bee65dd727d981486d3a78331a7d6bc24c71ceb99"}
Feb 02 13:15:35 crc kubenswrapper[4721]: I0202 13:15:35.889983 4721 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready
Feb 02 13:15:35 crc kubenswrapper[4721]: I0202 13:15:35.890617 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="b3605888-b0c3-4049-8f6a-cd4f380b91a7" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 02 13:15:38 crc kubenswrapper[4721]: I0202 13:15:38.355881 4721 generic.go:334] "Generic (PLEG): container finished" podID="ff82afec-f54e-4b47-8399-fd27b44558d3" containerID="0bef98b1e102f71b642c054d20c427193cff60dddb3067aac1de670786f343f9" exitCode=0
finished" podID="ff82afec-f54e-4b47-8399-fd27b44558d3" containerID="0bef98b1e102f71b642c054d20c427193cff60dddb3067aac1de670786f343f9" exitCode=0 Feb 02 13:15:38 crc kubenswrapper[4721]: I0202 13:15:38.355949 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgnrr" event={"ID":"ff82afec-f54e-4b47-8399-fd27b44558d3","Type":"ContainerDied","Data":"0bef98b1e102f71b642c054d20c427193cff60dddb3067aac1de670786f343f9"} Feb 02 13:15:39 crc kubenswrapper[4721]: I0202 13:15:39.364642 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgnrr" event={"ID":"ff82afec-f54e-4b47-8399-fd27b44558d3","Type":"ContainerStarted","Data":"c0f7d57133aed0dfb5f80491de3135629651a4698d8c515701f30eb672053a40"} Feb 02 13:15:39 crc kubenswrapper[4721]: I0202 13:15:39.386675 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tgnrr" podStartSLOduration=2.735167169 podStartE2EDuration="8.386653191s" podCreationTimestamp="2026-02-02 13:15:31 +0000 UTC" firstStartedPulling="2026-02-02 13:15:33.304608444 +0000 UTC m=+873.607122833" lastFinishedPulling="2026-02-02 13:15:38.956094466 +0000 UTC m=+879.258608855" observedRunningTime="2026-02-02 13:15:39.382777118 +0000 UTC m=+879.685291517" watchObservedRunningTime="2026-02-02 13:15:39.386653191 +0000 UTC m=+879.689167590" Feb 02 13:15:42 crc kubenswrapper[4721]: I0202 13:15:42.188001 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tgnrr" Feb 02 13:15:42 crc kubenswrapper[4721]: I0202 13:15:42.188570 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tgnrr" Feb 02 13:15:42 crc kubenswrapper[4721]: I0202 13:15:42.244206 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tgnrr" Feb 02 13:15:45 crc kubenswrapper[4721]: I0202 13:15:45.893372 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Feb 02 13:15:52 crc kubenswrapper[4721]: I0202 13:15:52.229320 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tgnrr" Feb 02 13:15:52 crc kubenswrapper[4721]: I0202 13:15:52.313854 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tgnrr"] Feb 02 13:15:52 crc kubenswrapper[4721]: I0202 13:15:52.362696 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w5wlg"] Feb 02 13:15:52 crc kubenswrapper[4721]: I0202 13:15:52.362938 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w5wlg" podUID="2db39b59-16bf-4029-b8be-4be395b09cdf" containerName="registry-server" containerID="cri-o://fac2c6657c2c4b2c8f188a592777004421506c7d4a56916a7a265847528a9b87" gracePeriod=2 Feb 02 13:15:52 crc kubenswrapper[4721]: I0202 13:15:52.747655 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w5wlg" Feb 02 13:15:52 crc kubenswrapper[4721]: I0202 13:15:52.811677 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2db39b59-16bf-4029-b8be-4be395b09cdf-utilities\") pod \"2db39b59-16bf-4029-b8be-4be395b09cdf\" (UID: \"2db39b59-16bf-4029-b8be-4be395b09cdf\") " Feb 02 13:15:52 crc kubenswrapper[4721]: I0202 13:15:52.811859 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9j2g\" (UniqueName: \"kubernetes.io/projected/2db39b59-16bf-4029-b8be-4be395b09cdf-kube-api-access-c9j2g\") pod \"2db39b59-16bf-4029-b8be-4be395b09cdf\" (UID: \"2db39b59-16bf-4029-b8be-4be395b09cdf\") " Feb 02 13:15:52 crc kubenswrapper[4721]: I0202 13:15:52.811937 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2db39b59-16bf-4029-b8be-4be395b09cdf-catalog-content\") pod \"2db39b59-16bf-4029-b8be-4be395b09cdf\" (UID: \"2db39b59-16bf-4029-b8be-4be395b09cdf\") " Feb 02 13:15:52 crc kubenswrapper[4721]: I0202 13:15:52.817964 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2db39b59-16bf-4029-b8be-4be395b09cdf-utilities" (OuterVolumeSpecName: "utilities") pod "2db39b59-16bf-4029-b8be-4be395b09cdf" (UID: "2db39b59-16bf-4029-b8be-4be395b09cdf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:15:52 crc kubenswrapper[4721]: I0202 13:15:52.823506 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2db39b59-16bf-4029-b8be-4be395b09cdf-kube-api-access-c9j2g" (OuterVolumeSpecName: "kube-api-access-c9j2g") pod "2db39b59-16bf-4029-b8be-4be395b09cdf" (UID: "2db39b59-16bf-4029-b8be-4be395b09cdf"). InnerVolumeSpecName "kube-api-access-c9j2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:15:52 crc kubenswrapper[4721]: I0202 13:15:52.885025 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2db39b59-16bf-4029-b8be-4be395b09cdf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2db39b59-16bf-4029-b8be-4be395b09cdf" (UID: "2db39b59-16bf-4029-b8be-4be395b09cdf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:15:52 crc kubenswrapper[4721]: I0202 13:15:52.914349 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2db39b59-16bf-4029-b8be-4be395b09cdf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:15:52 crc kubenswrapper[4721]: I0202 13:15:52.914402 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2db39b59-16bf-4029-b8be-4be395b09cdf-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:15:52 crc kubenswrapper[4721]: I0202 13:15:52.914416 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9j2g\" (UniqueName: \"kubernetes.io/projected/2db39b59-16bf-4029-b8be-4be395b09cdf-kube-api-access-c9j2g\") on node \"crc\" DevicePath \"\"" Feb 02 13:15:53 crc kubenswrapper[4721]: I0202 13:15:53.452812 4721 generic.go:334] "Generic (PLEG): container finished" podID="2db39b59-16bf-4029-b8be-4be395b09cdf" containerID="fac2c6657c2c4b2c8f188a592777004421506c7d4a56916a7a265847528a9b87" exitCode=0 Feb 02 13:15:53 crc kubenswrapper[4721]: I0202 13:15:53.452872 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5wlg" event={"ID":"2db39b59-16bf-4029-b8be-4be395b09cdf","Type":"ContainerDied","Data":"fac2c6657c2c4b2c8f188a592777004421506c7d4a56916a7a265847528a9b87"} Feb 02 13:15:53 crc kubenswrapper[4721]: I0202 13:15:53.453167 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5wlg" event={"ID":"2db39b59-16bf-4029-b8be-4be395b09cdf","Type":"ContainerDied","Data":"aa53f668db48e2aa112a5cbfcc3d2601a92bffa612a49b3105e8618823c15e6b"} Feb 02 13:15:53 crc kubenswrapper[4721]: I0202 13:15:53.453190 4721 scope.go:117] "RemoveContainer" containerID="fac2c6657c2c4b2c8f188a592777004421506c7d4a56916a7a265847528a9b87" Feb 02 13:15:53 crc kubenswrapper[4721]: I0202 13:15:53.452886 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w5wlg" Feb 02 13:15:53 crc kubenswrapper[4721]: I0202 13:15:53.472500 4721 scope.go:117] "RemoveContainer" containerID="6a9b786e3b23e3aaade71c1314045316e2616b985dc63d18d872102a5103829f" Feb 02 13:15:53 crc kubenswrapper[4721]: I0202 13:15:53.489633 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w5wlg"] Feb 02 13:15:53 crc kubenswrapper[4721]: I0202 13:15:53.505690 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w5wlg"] Feb 02 13:15:53 crc kubenswrapper[4721]: I0202 13:15:53.517511 4721 scope.go:117] "RemoveContainer" containerID="cda228ff8b653e386af95f32890f0fe9189a306544ff456cd8d4ee6c401233dc" Feb 02 13:15:53 crc kubenswrapper[4721]: I0202 13:15:53.549946 4721 scope.go:117] "RemoveContainer" containerID="fac2c6657c2c4b2c8f188a592777004421506c7d4a56916a7a265847528a9b87" Feb 02 13:15:53 crc kubenswrapper[4721]: E0202 13:15:53.576550 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fac2c6657c2c4b2c8f188a592777004421506c7d4a56916a7a265847528a9b87\": container with ID starting with fac2c6657c2c4b2c8f188a592777004421506c7d4a56916a7a265847528a9b87 not found: ID does not exist" containerID="fac2c6657c2c4b2c8f188a592777004421506c7d4a56916a7a265847528a9b87" Feb 02 13:15:53 crc kubenswrapper[4721]: I0202 13:15:53.576644 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fac2c6657c2c4b2c8f188a592777004421506c7d4a56916a7a265847528a9b87"} err="failed to get container status \"fac2c6657c2c4b2c8f188a592777004421506c7d4a56916a7a265847528a9b87\": rpc error: code = NotFound desc = could not find container \"fac2c6657c2c4b2c8f188a592777004421506c7d4a56916a7a265847528a9b87\": container with ID starting with fac2c6657c2c4b2c8f188a592777004421506c7d4a56916a7a265847528a9b87 not found: ID does not exist" Feb 02 13:15:53 crc kubenswrapper[4721]: I0202 13:15:53.576709 4721 scope.go:117] "RemoveContainer" containerID="6a9b786e3b23e3aaade71c1314045316e2616b985dc63d18d872102a5103829f" Feb 02 13:15:53 crc kubenswrapper[4721]: E0202 13:15:53.577289 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a9b786e3b23e3aaade71c1314045316e2616b985dc63d18d872102a5103829f\": container with ID starting with 6a9b786e3b23e3aaade71c1314045316e2616b985dc63d18d872102a5103829f not found: ID does not exist" containerID="6a9b786e3b23e3aaade71c1314045316e2616b985dc63d18d872102a5103829f" Feb 02 13:15:53 crc kubenswrapper[4721]: I0202 13:15:53.577350 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a9b786e3b23e3aaade71c1314045316e2616b985dc63d18d872102a5103829f"} err="failed to get container status \"6a9b786e3b23e3aaade71c1314045316e2616b985dc63d18d872102a5103829f\": rpc error: code = NotFound desc = could not find container \"6a9b786e3b23e3aaade71c1314045316e2616b985dc63d18d872102a5103829f\": container with ID starting with 6a9b786e3b23e3aaade71c1314045316e2616b985dc63d18d872102a5103829f not found: ID does not exist" Feb 02 13:15:53 crc kubenswrapper[4721]: I0202 13:15:53.577384 4721 scope.go:117] "RemoveContainer" containerID="cda228ff8b653e386af95f32890f0fe9189a306544ff456cd8d4ee6c401233dc" Feb 02 13:15:53 crc kubenswrapper[4721]: E0202 13:15:53.577666 4721 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"cda228ff8b653e386af95f32890f0fe9189a306544ff456cd8d4ee6c401233dc\": container with ID starting with cda228ff8b653e386af95f32890f0fe9189a306544ff456cd8d4ee6c401233dc not found: ID does not exist" containerID="cda228ff8b653e386af95f32890f0fe9189a306544ff456cd8d4ee6c401233dc" Feb 02 13:15:53 crc kubenswrapper[4721]: I0202 13:15:53.577694 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cda228ff8b653e386af95f32890f0fe9189a306544ff456cd8d4ee6c401233dc"} err="failed to get container status \"cda228ff8b653e386af95f32890f0fe9189a306544ff456cd8d4ee6c401233dc\": rpc error: code = NotFound desc = could not find container \"cda228ff8b653e386af95f32890f0fe9189a306544ff456cd8d4ee6c401233dc\": container with ID starting with cda228ff8b653e386af95f32890f0fe9189a306544ff456cd8d4ee6c401233dc not found: ID does not exist" Feb 02 13:15:54 crc kubenswrapper[4721]: I0202 13:15:54.417567 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2db39b59-16bf-4029-b8be-4be395b09cdf" path="/var/lib/kubelet/pods/2db39b59-16bf-4029-b8be-4be395b09cdf/volumes" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.333519 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-x8xcn"] Feb 02 13:16:02 crc kubenswrapper[4721]: E0202 13:16:02.334316 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2db39b59-16bf-4029-b8be-4be395b09cdf" containerName="registry-server" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.334329 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="2db39b59-16bf-4029-b8be-4be395b09cdf" containerName="registry-server" Feb 02 13:16:02 crc kubenswrapper[4721]: E0202 13:16:02.334359 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2db39b59-16bf-4029-b8be-4be395b09cdf" containerName="extract-utilities" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.334365 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="2db39b59-16bf-4029-b8be-4be395b09cdf" containerName="extract-utilities" Feb 02 13:16:02 crc kubenswrapper[4721]: E0202 13:16:02.334372 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2db39b59-16bf-4029-b8be-4be395b09cdf" containerName="extract-content" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.334378 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="2db39b59-16bf-4029-b8be-4be395b09cdf" containerName="extract-content" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.334496 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="2db39b59-16bf-4029-b8be-4be395b09cdf" containerName="registry-server" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.335085 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.338489 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.338488 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.339028 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.339227 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.339255 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-vwh88" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.347916 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-x8xcn"] Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.348631 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.406320 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-x8xcn"] Feb 02 13:16:02 crc kubenswrapper[4721]: E0202 13:16:02.406837 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-8wznf metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-8wznf metrics sa-token tmp trusted-ca]: context canceled" pod="openshift-logging/collector-x8xcn" podUID="107cfbba-9034-4f35-adf0-801a876e9d52" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.467596 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/107cfbba-9034-4f35-adf0-801a876e9d52-config-openshift-service-cacrt\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.467694 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/107cfbba-9034-4f35-adf0-801a876e9d52-collector-syslog-receiver\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.467732 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/107cfbba-9034-4f35-adf0-801a876e9d52-trusted-ca\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.467770 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/107cfbba-9034-4f35-adf0-801a876e9d52-tmp\") pod \"collector-x8xcn\" (UID: 
\"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.467793 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/107cfbba-9034-4f35-adf0-801a876e9d52-collector-token\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.467839 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/107cfbba-9034-4f35-adf0-801a876e9d52-config\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.467859 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/107cfbba-9034-4f35-adf0-801a876e9d52-sa-token\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.467887 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/107cfbba-9034-4f35-adf0-801a876e9d52-datadir\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.467910 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wznf\" (UniqueName: \"kubernetes.io/projected/107cfbba-9034-4f35-adf0-801a876e9d52-kube-api-access-8wznf\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.467972 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/107cfbba-9034-4f35-adf0-801a876e9d52-entrypoint\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.468014 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/107cfbba-9034-4f35-adf0-801a876e9d52-metrics\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.510881 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.521266 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.569357 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/107cfbba-9034-4f35-adf0-801a876e9d52-metrics\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.569431 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/107cfbba-9034-4f35-adf0-801a876e9d52-config-openshift-service-cacrt\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.569487 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/107cfbba-9034-4f35-adf0-801a876e9d52-collector-syslog-receiver\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.569509 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/107cfbba-9034-4f35-adf0-801a876e9d52-trusted-ca\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.569556 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/107cfbba-9034-4f35-adf0-801a876e9d52-tmp\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.569578 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/107cfbba-9034-4f35-adf0-801a876e9d52-collector-token\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.569603 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/107cfbba-9034-4f35-adf0-801a876e9d52-config\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.569623 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/107cfbba-9034-4f35-adf0-801a876e9d52-sa-token\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.569651 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/107cfbba-9034-4f35-adf0-801a876e9d52-datadir\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.569675 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wznf\" (UniqueName: 
\"kubernetes.io/projected/107cfbba-9034-4f35-adf0-801a876e9d52-kube-api-access-8wznf\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.569721 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/107cfbba-9034-4f35-adf0-801a876e9d52-entrypoint\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.570606 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/107cfbba-9034-4f35-adf0-801a876e9d52-entrypoint\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: E0202 13:16:02.571916 4721 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found Feb 02 13:16:02 crc kubenswrapper[4721]: E0202 13:16:02.572292 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/107cfbba-9034-4f35-adf0-801a876e9d52-metrics podName:107cfbba-9034-4f35-adf0-801a876e9d52 nodeName:}" failed. No retries permitted until 2026-02-02 13:16:03.072273695 +0000 UTC m=+903.374788084 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/107cfbba-9034-4f35-adf0-801a876e9d52-metrics") pod "collector-x8xcn" (UID: "107cfbba-9034-4f35-adf0-801a876e9d52") : secret "collector-metrics" not found Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.573358 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/107cfbba-9034-4f35-adf0-801a876e9d52-trusted-ca\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.573470 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/107cfbba-9034-4f35-adf0-801a876e9d52-config-openshift-service-cacrt\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.573996 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/107cfbba-9034-4f35-adf0-801a876e9d52-datadir\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.574046 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/107cfbba-9034-4f35-adf0-801a876e9d52-config\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.578669 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/107cfbba-9034-4f35-adf0-801a876e9d52-tmp\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 
Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.580315 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/107cfbba-9034-4f35-adf0-801a876e9d52-collector-token\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn"
Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.593465 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/107cfbba-9034-4f35-adf0-801a876e9d52-sa-token\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn"
Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.594299 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wznf\" (UniqueName: \"kubernetes.io/projected/107cfbba-9034-4f35-adf0-801a876e9d52-kube-api-access-8wznf\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn"
Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.670521 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/107cfbba-9034-4f35-adf0-801a876e9d52-collector-token\") pod \"107cfbba-9034-4f35-adf0-801a876e9d52\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") "
Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.670629 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/107cfbba-9034-4f35-adf0-801a876e9d52-datadir\") pod \"107cfbba-9034-4f35-adf0-801a876e9d52\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") "
Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.670672 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/107cfbba-9034-4f35-adf0-801a876e9d52-collector-syslog-receiver\") pod \"107cfbba-9034-4f35-adf0-801a876e9d52\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") "
Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.670706 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/107cfbba-9034-4f35-adf0-801a876e9d52-config-openshift-service-cacrt\") pod \"107cfbba-9034-4f35-adf0-801a876e9d52\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") "
Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.670806 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/107cfbba-9034-4f35-adf0-801a876e9d52-config\") pod \"107cfbba-9034-4f35-adf0-801a876e9d52\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") "
Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.670875 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wznf\" (UniqueName: \"kubernetes.io/projected/107cfbba-9034-4f35-adf0-801a876e9d52-kube-api-access-8wznf\") pod \"107cfbba-9034-4f35-adf0-801a876e9d52\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") "
Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.670899 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/107cfbba-9034-4f35-adf0-801a876e9d52-trusted-ca\") pod \"107cfbba-9034-4f35-adf0-801a876e9d52\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") "
Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.670934 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/107cfbba-9034-4f35-adf0-801a876e9d52-entrypoint\") pod \"107cfbba-9034-4f35-adf0-801a876e9d52\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") "
Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.670966 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/107cfbba-9034-4f35-adf0-801a876e9d52-sa-token\") pod \"107cfbba-9034-4f35-adf0-801a876e9d52\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") "
Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.670994 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/107cfbba-9034-4f35-adf0-801a876e9d52-tmp\") pod \"107cfbba-9034-4f35-adf0-801a876e9d52\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") "
Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.671685 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/107cfbba-9034-4f35-adf0-801a876e9d52-config" (OuterVolumeSpecName: "config") pod "107cfbba-9034-4f35-adf0-801a876e9d52" (UID: "107cfbba-9034-4f35-adf0-801a876e9d52"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.671867 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/107cfbba-9034-4f35-adf0-801a876e9d52-datadir" (OuterVolumeSpecName: "datadir") pod "107cfbba-9034-4f35-adf0-801a876e9d52" (UID: "107cfbba-9034-4f35-adf0-801a876e9d52"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.672132 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/107cfbba-9034-4f35-adf0-801a876e9d52-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "107cfbba-9034-4f35-adf0-801a876e9d52" (UID: "107cfbba-9034-4f35-adf0-801a876e9d52"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.672299 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/107cfbba-9034-4f35-adf0-801a876e9d52-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "107cfbba-9034-4f35-adf0-801a876e9d52" (UID: "107cfbba-9034-4f35-adf0-801a876e9d52"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.672662 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/107cfbba-9034-4f35-adf0-801a876e9d52-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "107cfbba-9034-4f35-adf0-801a876e9d52" (UID: "107cfbba-9034-4f35-adf0-801a876e9d52"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.675518 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/107cfbba-9034-4f35-adf0-801a876e9d52-collector-token" (OuterVolumeSpecName: "collector-token") pod "107cfbba-9034-4f35-adf0-801a876e9d52" (UID: "107cfbba-9034-4f35-adf0-801a876e9d52"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.679575 4721 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/107cfbba-9034-4f35-adf0-801a876e9d52-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.679612 4721 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/107cfbba-9034-4f35-adf0-801a876e9d52-entrypoint\") on node \"crc\" DevicePath \"\""
Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.679623 4721 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/107cfbba-9034-4f35-adf0-801a876e9d52-collector-token\") on node \"crc\" DevicePath \"\""
Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.679632 4721 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/107cfbba-9034-4f35-adf0-801a876e9d52-datadir\") on node \"crc\" DevicePath \"\""
Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.679642 4721 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/107cfbba-9034-4f35-adf0-801a876e9d52-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\""
Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.679650 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/107cfbba-9034-4f35-adf0-801a876e9d52-config\") on node \"crc\" DevicePath \"\""
Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.682388 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/107cfbba-9034-4f35-adf0-801a876e9d52-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "107cfbba-9034-4f35-adf0-801a876e9d52" (UID: "107cfbba-9034-4f35-adf0-801a876e9d52"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.682423 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/107cfbba-9034-4f35-adf0-801a876e9d52-tmp" (OuterVolumeSpecName: "tmp") pod "107cfbba-9034-4f35-adf0-801a876e9d52" (UID: "107cfbba-9034-4f35-adf0-801a876e9d52"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.685462 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/107cfbba-9034-4f35-adf0-801a876e9d52-kube-api-access-8wznf" (OuterVolumeSpecName: "kube-api-access-8wznf") pod "107cfbba-9034-4f35-adf0-801a876e9d52" (UID: "107cfbba-9034-4f35-adf0-801a876e9d52"). InnerVolumeSpecName "kube-api-access-8wznf".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.691850 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/107cfbba-9034-4f35-adf0-801a876e9d52-sa-token" (OuterVolumeSpecName: "sa-token") pod "107cfbba-9034-4f35-adf0-801a876e9d52" (UID: "107cfbba-9034-4f35-adf0-801a876e9d52"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.781046 4721 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/107cfbba-9034-4f35-adf0-801a876e9d52-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.781319 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wznf\" (UniqueName: \"kubernetes.io/projected/107cfbba-9034-4f35-adf0-801a876e9d52-kube-api-access-8wznf\") on node \"crc\" DevicePath \"\"" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.781383 4721 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/107cfbba-9034-4f35-adf0-801a876e9d52-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 13:16:02 crc kubenswrapper[4721]: I0202 13:16:02.781441 4721 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/107cfbba-9034-4f35-adf0-801a876e9d52-tmp\") on node \"crc\" DevicePath \"\"" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.086831 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/107cfbba-9034-4f35-adf0-801a876e9d52-metrics\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.092576 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/107cfbba-9034-4f35-adf0-801a876e9d52-metrics\") pod \"collector-x8xcn\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " pod="openshift-logging/collector-x8xcn" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.188635 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/107cfbba-9034-4f35-adf0-801a876e9d52-metrics\") pod \"107cfbba-9034-4f35-adf0-801a876e9d52\" (UID: \"107cfbba-9034-4f35-adf0-801a876e9d52\") " Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.191465 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/107cfbba-9034-4f35-adf0-801a876e9d52-metrics" (OuterVolumeSpecName: "metrics") pod "107cfbba-9034-4f35-adf0-801a876e9d52" (UID: "107cfbba-9034-4f35-adf0-801a876e9d52"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.290674 4721 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/107cfbba-9034-4f35-adf0-801a876e9d52-metrics\") on node \"crc\" DevicePath \"\"" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.517154 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-x8xcn" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.570435 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-x8xcn"] Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.581246 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-x8xcn"] Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.585745 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-ls7f7"] Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.586720 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.588840 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.592388 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.592582 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.592702 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-vwh88" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.592890 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.601547 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.609865 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-ls7f7"] Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.699941 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n64dr\" (UniqueName: \"kubernetes.io/projected/749232df-9bfe-43cb-a716-6eadd2cbc290-kube-api-access-n64dr\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.700022 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/749232df-9bfe-43cb-a716-6eadd2cbc290-datadir\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.700049 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/749232df-9bfe-43cb-a716-6eadd2cbc290-sa-token\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.700201 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/749232df-9bfe-43cb-a716-6eadd2cbc290-config\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.700275 4721 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/749232df-9bfe-43cb-a716-6eadd2cbc290-entrypoint\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.700301 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/749232df-9bfe-43cb-a716-6eadd2cbc290-metrics\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.700321 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/749232df-9bfe-43cb-a716-6eadd2cbc290-collector-syslog-receiver\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.700351 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/749232df-9bfe-43cb-a716-6eadd2cbc290-collector-token\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.700486 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/749232df-9bfe-43cb-a716-6eadd2cbc290-trusted-ca\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.700555 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/749232df-9bfe-43cb-a716-6eadd2cbc290-config-openshift-service-cacrt\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.700739 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/749232df-9bfe-43cb-a716-6eadd2cbc290-tmp\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.802369 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/749232df-9bfe-43cb-a716-6eadd2cbc290-tmp\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.802437 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n64dr\" (UniqueName: \"kubernetes.io/projected/749232df-9bfe-43cb-a716-6eadd2cbc290-kube-api-access-n64dr\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.802479 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: 
\"kubernetes.io/host-path/749232df-9bfe-43cb-a716-6eadd2cbc290-datadir\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.802497 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/749232df-9bfe-43cb-a716-6eadd2cbc290-sa-token\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.802511 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/749232df-9bfe-43cb-a716-6eadd2cbc290-config\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.802583 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/749232df-9bfe-43cb-a716-6eadd2cbc290-datadir\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.802964 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/749232df-9bfe-43cb-a716-6eadd2cbc290-entrypoint\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.802987 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/749232df-9bfe-43cb-a716-6eadd2cbc290-metrics\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.803004 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/749232df-9bfe-43cb-a716-6eadd2cbc290-collector-syslog-receiver\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.803881 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/749232df-9bfe-43cb-a716-6eadd2cbc290-collector-token\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.803833 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/749232df-9bfe-43cb-a716-6eadd2cbc290-config\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.803952 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/749232df-9bfe-43cb-a716-6eadd2cbc290-trusted-ca\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.804131 4721 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/749232df-9bfe-43cb-a716-6eadd2cbc290-entrypoint\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.803975 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/749232df-9bfe-43cb-a716-6eadd2cbc290-config-openshift-service-cacrt\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.805107 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/749232df-9bfe-43cb-a716-6eadd2cbc290-config-openshift-service-cacrt\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.806508 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/749232df-9bfe-43cb-a716-6eadd2cbc290-trusted-ca\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.806670 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/749232df-9bfe-43cb-a716-6eadd2cbc290-tmp\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.806819 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/749232df-9bfe-43cb-a716-6eadd2cbc290-collector-syslog-receiver\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.807339 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/749232df-9bfe-43cb-a716-6eadd2cbc290-metrics\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.819371 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n64dr\" (UniqueName: \"kubernetes.io/projected/749232df-9bfe-43cb-a716-6eadd2cbc290-kube-api-access-n64dr\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.820116 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/749232df-9bfe-43cb-a716-6eadd2cbc290-collector-token\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.821723 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/749232df-9bfe-43cb-a716-6eadd2cbc290-sa-token\") pod \"collector-ls7f7\" (UID: \"749232df-9bfe-43cb-a716-6eadd2cbc290\") " pod="openshift-logging/collector-ls7f7" 
Feb 02 13:16:03 crc kubenswrapper[4721]: I0202 13:16:03.932517 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-ls7f7" Feb 02 13:16:04 crc kubenswrapper[4721]: I0202 13:16:04.349295 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-ls7f7"] Feb 02 13:16:04 crc kubenswrapper[4721]: I0202 13:16:04.417863 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="107cfbba-9034-4f35-adf0-801a876e9d52" path="/var/lib/kubelet/pods/107cfbba-9034-4f35-adf0-801a876e9d52/volumes" Feb 02 13:16:04 crc kubenswrapper[4721]: I0202 13:16:04.525830 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-ls7f7" event={"ID":"749232df-9bfe-43cb-a716-6eadd2cbc290","Type":"ContainerStarted","Data":"a442f97eabaa9600446ad9cd88843ba6f3cfc81fcbeffbd34a8f940bc84f3190"} Feb 02 13:16:11 crc kubenswrapper[4721]: I0202 13:16:11.599688 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-ls7f7" event={"ID":"749232df-9bfe-43cb-a716-6eadd2cbc290","Type":"ContainerStarted","Data":"80a10c759d315ca6b2c4e5baabd5c2724352a80af6e7d18599a82ece805edbbe"} Feb 02 13:16:11 crc kubenswrapper[4721]: I0202 13:16:11.621702 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-ls7f7" podStartSLOduration=1.77901154 podStartE2EDuration="8.621678564s" podCreationTimestamp="2026-02-02 13:16:03 +0000 UTC" firstStartedPulling="2026-02-02 13:16:04.360030364 +0000 UTC m=+904.662544753" lastFinishedPulling="2026-02-02 13:16:11.202697388 +0000 UTC m=+911.505211777" observedRunningTime="2026-02-02 13:16:11.617682607 +0000 UTC m=+911.920197016" watchObservedRunningTime="2026-02-02 13:16:11.621678564 +0000 UTC m=+911.924192963" Feb 02 13:16:41 crc kubenswrapper[4721]: I0202 13:16:41.733851 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl"] Feb 02 13:16:41 crc kubenswrapper[4721]: I0202 13:16:41.741325 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl" Feb 02 13:16:41 crc kubenswrapper[4721]: I0202 13:16:41.743838 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 02 13:16:41 crc kubenswrapper[4721]: I0202 13:16:41.761183 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl"] Feb 02 13:16:41 crc kubenswrapper[4721]: I0202 13:16:41.833895 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d506202-dc87-49f6-9160-ccedb0cbae19-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl\" (UID: \"3d506202-dc87-49f6-9160-ccedb0cbae19\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl" Feb 02 13:16:41 crc kubenswrapper[4721]: I0202 13:16:41.833969 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d506202-dc87-49f6-9160-ccedb0cbae19-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl\" (UID: \"3d506202-dc87-49f6-9160-ccedb0cbae19\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl" Feb 02 13:16:41 crc kubenswrapper[4721]: I0202 13:16:41.834018 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxxlb\" (UniqueName: \"kubernetes.io/projected/3d506202-dc87-49f6-9160-ccedb0cbae19-kube-api-access-fxxlb\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl\" (UID: \"3d506202-dc87-49f6-9160-ccedb0cbae19\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl" Feb 02 13:16:41 crc kubenswrapper[4721]: I0202 13:16:41.934497 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d506202-dc87-49f6-9160-ccedb0cbae19-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl\" (UID: \"3d506202-dc87-49f6-9160-ccedb0cbae19\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl" Feb 02 13:16:41 crc kubenswrapper[4721]: I0202 13:16:41.934560 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d506202-dc87-49f6-9160-ccedb0cbae19-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl\" (UID: \"3d506202-dc87-49f6-9160-ccedb0cbae19\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl" Feb 02 13:16:41 crc kubenswrapper[4721]: I0202 13:16:41.934595 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxxlb\" (UniqueName: \"kubernetes.io/projected/3d506202-dc87-49f6-9160-ccedb0cbae19-kube-api-access-fxxlb\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl\" (UID: \"3d506202-dc87-49f6-9160-ccedb0cbae19\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl" Feb 02 13:16:41 crc kubenswrapper[4721]: I0202 13:16:41.935149 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/3d506202-dc87-49f6-9160-ccedb0cbae19-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl\" (UID: \"3d506202-dc87-49f6-9160-ccedb0cbae19\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl" Feb 02 13:16:41 crc kubenswrapper[4721]: I0202 13:16:41.935262 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d506202-dc87-49f6-9160-ccedb0cbae19-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl\" (UID: \"3d506202-dc87-49f6-9160-ccedb0cbae19\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl" Feb 02 13:16:41 crc kubenswrapper[4721]: I0202 13:16:41.960215 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxxlb\" (UniqueName: \"kubernetes.io/projected/3d506202-dc87-49f6-9160-ccedb0cbae19-kube-api-access-fxxlb\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl\" (UID: \"3d506202-dc87-49f6-9160-ccedb0cbae19\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl" Feb 02 13:16:42 crc kubenswrapper[4721]: I0202 13:16:42.062724 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl" Feb 02 13:16:42 crc kubenswrapper[4721]: I0202 13:16:42.491079 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl"] Feb 02 13:16:42 crc kubenswrapper[4721]: I0202 13:16:42.820395 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl" event={"ID":"3d506202-dc87-49f6-9160-ccedb0cbae19","Type":"ContainerStarted","Data":"52dc1a499dcb641663e9ecef3d7f9f4fb35334a80304139e481a1741543ca999"} Feb 02 13:16:42 crc kubenswrapper[4721]: I0202 13:16:42.820803 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl" event={"ID":"3d506202-dc87-49f6-9160-ccedb0cbae19","Type":"ContainerStarted","Data":"d3848dde6c1b685fc6b63243fd6d7fb1adb23abdff42209ec7e990c5480d1f60"} Feb 02 13:16:43 crc kubenswrapper[4721]: I0202 13:16:43.828159 4721 generic.go:334] "Generic (PLEG): container finished" podID="3d506202-dc87-49f6-9160-ccedb0cbae19" containerID="52dc1a499dcb641663e9ecef3d7f9f4fb35334a80304139e481a1741543ca999" exitCode=0 Feb 02 13:16:43 crc kubenswrapper[4721]: I0202 13:16:43.828266 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl" event={"ID":"3d506202-dc87-49f6-9160-ccedb0cbae19","Type":"ContainerDied","Data":"52dc1a499dcb641663e9ecef3d7f9f4fb35334a80304139e481a1741543ca999"} Feb 02 13:16:45 crc kubenswrapper[4721]: I0202 13:16:45.849342 4721 generic.go:334] "Generic (PLEG): container finished" podID="3d506202-dc87-49f6-9160-ccedb0cbae19" containerID="527f07b7f2fe1f7e6b7d5c148313ddb9f7dcbbbe099ba5a5515ef28eeb0ed2fa" exitCode=0 Feb 02 13:16:45 crc kubenswrapper[4721]: I0202 13:16:45.849522 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl" 
event={"ID":"3d506202-dc87-49f6-9160-ccedb0cbae19","Type":"ContainerDied","Data":"527f07b7f2fe1f7e6b7d5c148313ddb9f7dcbbbe099ba5a5515ef28eeb0ed2fa"} Feb 02 13:16:46 crc kubenswrapper[4721]: I0202 13:16:46.859013 4721 generic.go:334] "Generic (PLEG): container finished" podID="3d506202-dc87-49f6-9160-ccedb0cbae19" containerID="191d211ba575e63f5df87768edb2801a7a7407d76c4d1b7a509418e1f62193f2" exitCode=0 Feb 02 13:16:46 crc kubenswrapper[4721]: I0202 13:16:46.859175 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl" event={"ID":"3d506202-dc87-49f6-9160-ccedb0cbae19","Type":"ContainerDied","Data":"191d211ba575e63f5df87768edb2801a7a7407d76c4d1b7a509418e1f62193f2"} Feb 02 13:16:48 crc kubenswrapper[4721]: I0202 13:16:48.239221 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl" Feb 02 13:16:48 crc kubenswrapper[4721]: I0202 13:16:48.435501 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxxlb\" (UniqueName: \"kubernetes.io/projected/3d506202-dc87-49f6-9160-ccedb0cbae19-kube-api-access-fxxlb\") pod \"3d506202-dc87-49f6-9160-ccedb0cbae19\" (UID: \"3d506202-dc87-49f6-9160-ccedb0cbae19\") " Feb 02 13:16:48 crc kubenswrapper[4721]: I0202 13:16:48.436668 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d506202-dc87-49f6-9160-ccedb0cbae19-util\") pod \"3d506202-dc87-49f6-9160-ccedb0cbae19\" (UID: \"3d506202-dc87-49f6-9160-ccedb0cbae19\") " Feb 02 13:16:48 crc kubenswrapper[4721]: I0202 13:16:48.436735 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d506202-dc87-49f6-9160-ccedb0cbae19-bundle\") pod \"3d506202-dc87-49f6-9160-ccedb0cbae19\" (UID: \"3d506202-dc87-49f6-9160-ccedb0cbae19\") " Feb 02 13:16:48 crc kubenswrapper[4721]: I0202 13:16:48.437680 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d506202-dc87-49f6-9160-ccedb0cbae19-bundle" (OuterVolumeSpecName: "bundle") pod "3d506202-dc87-49f6-9160-ccedb0cbae19" (UID: "3d506202-dc87-49f6-9160-ccedb0cbae19"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:16:48 crc kubenswrapper[4721]: I0202 13:16:48.447350 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d506202-dc87-49f6-9160-ccedb0cbae19-kube-api-access-fxxlb" (OuterVolumeSpecName: "kube-api-access-fxxlb") pod "3d506202-dc87-49f6-9160-ccedb0cbae19" (UID: "3d506202-dc87-49f6-9160-ccedb0cbae19"). InnerVolumeSpecName "kube-api-access-fxxlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:16:48 crc kubenswrapper[4721]: I0202 13:16:48.450312 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d506202-dc87-49f6-9160-ccedb0cbae19-util" (OuterVolumeSpecName: "util") pod "3d506202-dc87-49f6-9160-ccedb0cbae19" (UID: "3d506202-dc87-49f6-9160-ccedb0cbae19"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:16:48 crc kubenswrapper[4721]: I0202 13:16:48.539969 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxxlb\" (UniqueName: \"kubernetes.io/projected/3d506202-dc87-49f6-9160-ccedb0cbae19-kube-api-access-fxxlb\") on node \"crc\" DevicePath \"\"" Feb 02 13:16:48 crc kubenswrapper[4721]: I0202 13:16:48.540011 4721 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d506202-dc87-49f6-9160-ccedb0cbae19-util\") on node \"crc\" DevicePath \"\"" Feb 02 13:16:48 crc kubenswrapper[4721]: I0202 13:16:48.540022 4721 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d506202-dc87-49f6-9160-ccedb0cbae19-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:16:48 crc kubenswrapper[4721]: I0202 13:16:48.874939 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl" event={"ID":"3d506202-dc87-49f6-9160-ccedb0cbae19","Type":"ContainerDied","Data":"d3848dde6c1b685fc6b63243fd6d7fb1adb23abdff42209ec7e990c5480d1f60"} Feb 02 13:16:48 crc kubenswrapper[4721]: I0202 13:16:48.874983 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3848dde6c1b685fc6b63243fd6d7fb1adb23abdff42209ec7e990c5480d1f60" Feb 02 13:16:48 crc kubenswrapper[4721]: I0202 13:16:48.875018 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl" Feb 02 13:16:51 crc kubenswrapper[4721]: I0202 13:16:51.776522 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-trhxn"] Feb 02 13:16:51 crc kubenswrapper[4721]: E0202 13:16:51.777465 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d506202-dc87-49f6-9160-ccedb0cbae19" containerName="extract" Feb 02 13:16:51 crc kubenswrapper[4721]: I0202 13:16:51.777482 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d506202-dc87-49f6-9160-ccedb0cbae19" containerName="extract" Feb 02 13:16:51 crc kubenswrapper[4721]: E0202 13:16:51.777501 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d506202-dc87-49f6-9160-ccedb0cbae19" containerName="util" Feb 02 13:16:51 crc kubenswrapper[4721]: I0202 13:16:51.777509 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d506202-dc87-49f6-9160-ccedb0cbae19" containerName="util" Feb 02 13:16:51 crc kubenswrapper[4721]: E0202 13:16:51.777529 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d506202-dc87-49f6-9160-ccedb0cbae19" containerName="pull" Feb 02 13:16:51 crc kubenswrapper[4721]: I0202 13:16:51.777538 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d506202-dc87-49f6-9160-ccedb0cbae19" containerName="pull" Feb 02 13:16:51 crc kubenswrapper[4721]: I0202 13:16:51.777713 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d506202-dc87-49f6-9160-ccedb0cbae19" containerName="extract" Feb 02 13:16:51 crc kubenswrapper[4721]: I0202 13:16:51.778391 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-trhxn" Feb 02 13:16:51 crc kubenswrapper[4721]: I0202 13:16:51.780325 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 02 13:16:51 crc kubenswrapper[4721]: I0202 13:16:51.780338 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-mp7tk" Feb 02 13:16:51 crc kubenswrapper[4721]: I0202 13:16:51.780946 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 02 13:16:51 crc kubenswrapper[4721]: I0202 13:16:51.794337 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-trhxn"] Feb 02 13:16:51 crc kubenswrapper[4721]: I0202 13:16:51.888236 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f6tz\" (UniqueName: \"kubernetes.io/projected/38f375ca-8f76-4eb1-a92d-d46f7628ecf6-kube-api-access-9f6tz\") pod \"nmstate-operator-646758c888-trhxn\" (UID: \"38f375ca-8f76-4eb1-a92d-d46f7628ecf6\") " pod="openshift-nmstate/nmstate-operator-646758c888-trhxn" Feb 02 13:16:51 crc kubenswrapper[4721]: I0202 13:16:51.990022 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f6tz\" (UniqueName: \"kubernetes.io/projected/38f375ca-8f76-4eb1-a92d-d46f7628ecf6-kube-api-access-9f6tz\") pod \"nmstate-operator-646758c888-trhxn\" (UID: \"38f375ca-8f76-4eb1-a92d-d46f7628ecf6\") " pod="openshift-nmstate/nmstate-operator-646758c888-trhxn" Feb 02 13:16:52 crc kubenswrapper[4721]: I0202 13:16:52.020887 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f6tz\" (UniqueName: \"kubernetes.io/projected/38f375ca-8f76-4eb1-a92d-d46f7628ecf6-kube-api-access-9f6tz\") pod \"nmstate-operator-646758c888-trhxn\" (UID: \"38f375ca-8f76-4eb1-a92d-d46f7628ecf6\") " pod="openshift-nmstate/nmstate-operator-646758c888-trhxn" Feb 02 13:16:52 crc kubenswrapper[4721]: I0202 13:16:52.099605 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-trhxn" Feb 02 13:16:52 crc kubenswrapper[4721]: I0202 13:16:52.588570 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-trhxn"] Feb 02 13:16:52 crc kubenswrapper[4721]: I0202 13:16:52.902979 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-trhxn" event={"ID":"38f375ca-8f76-4eb1-a92d-d46f7628ecf6","Type":"ContainerStarted","Data":"2ddab410ccbb6b22dcf76c89181dd9a3f1757aee320a55c24ac86d3892d979bd"} Feb 02 13:16:54 crc kubenswrapper[4721]: I0202 13:16:54.925096 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-trhxn" event={"ID":"38f375ca-8f76-4eb1-a92d-d46f7628ecf6","Type":"ContainerStarted","Data":"016ee087f7d4223d77ed13a94ebcc70893dca997181bea85c835c27571ae8e35"} Feb 02 13:16:54 crc kubenswrapper[4721]: I0202 13:16:54.941808 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-trhxn" podStartSLOduration=2.055682439 podStartE2EDuration="3.941791984s" podCreationTimestamp="2026-02-02 13:16:51 +0000 UTC" firstStartedPulling="2026-02-02 13:16:52.594761554 +0000 UTC m=+952.897275943" lastFinishedPulling="2026-02-02 13:16:54.480871099 +0000 UTC m=+954.783385488" observedRunningTime="2026-02-02 13:16:54.938941068 +0000 UTC m=+955.241455457" watchObservedRunningTime="2026-02-02 13:16:54.941791984 +0000 UTC m=+955.244306383" Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.612027 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-j4jzl"] Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.613680 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j4jzl" Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.620912 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-mmg2n"] Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.621971 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-mmg2n" Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.622410 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-drlbp" Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.622556 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.641418 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-j4jzl"] Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.641480 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-mmg2n"] Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.686863 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-dlvcq"] Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.687821 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-dlvcq" Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.761556 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv8gh\" (UniqueName: \"kubernetes.io/projected/92d17aed-5894-45b3-8fe9-08b5dfc7c702-kube-api-access-bv8gh\") pod \"nmstate-webhook-8474b5b9d8-j4jzl\" (UID: \"92d17aed-5894-45b3-8fe9-08b5dfc7c702\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j4jzl" Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.761690 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q9j8\" (UniqueName: \"kubernetes.io/projected/be1a5420-ea1d-40e0-bd09-241151dc6755-kube-api-access-2q9j8\") pod \"nmstate-metrics-54757c584b-mmg2n\" (UID: \"be1a5420-ea1d-40e0-bd09-241151dc6755\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-mmg2n" Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.761742 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/92d17aed-5894-45b3-8fe9-08b5dfc7c702-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-j4jzl\" (UID: \"92d17aed-5894-45b3-8fe9-08b5dfc7c702\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j4jzl" Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.807142 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-2f9ms"] Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.808548 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2f9ms" Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.811310 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-grgnx" Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.811507 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.811659 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.839834 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-2f9ms"] Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.864658 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q9j8\" (UniqueName: \"kubernetes.io/projected/be1a5420-ea1d-40e0-bd09-241151dc6755-kube-api-access-2q9j8\") pod \"nmstate-metrics-54757c584b-mmg2n\" (UID: \"be1a5420-ea1d-40e0-bd09-241151dc6755\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-mmg2n" Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.864728 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b15ef257-c4ff-4fd9-a04c-a92d38e51b18-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-2f9ms\" (UID: \"b15ef257-c4ff-4fd9-a04c-a92d38e51b18\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2f9ms" Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.864752 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/92d17aed-5894-45b3-8fe9-08b5dfc7c702-tls-key-pair\") pod 
\"nmstate-webhook-8474b5b9d8-j4jzl\" (UID: \"92d17aed-5894-45b3-8fe9-08b5dfc7c702\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j4jzl" Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.864832 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g77cp\" (UniqueName: \"kubernetes.io/projected/b15ef257-c4ff-4fd9-a04c-a92d38e51b18-kube-api-access-g77cp\") pod \"nmstate-console-plugin-7754f76f8b-2f9ms\" (UID: \"b15ef257-c4ff-4fd9-a04c-a92d38e51b18\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2f9ms" Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.864858 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1cf5f077-bb9b-42de-ab25-70b762c3e2e1-nmstate-lock\") pod \"nmstate-handler-dlvcq\" (UID: \"1cf5f077-bb9b-42de-ab25-70b762c3e2e1\") " pod="openshift-nmstate/nmstate-handler-dlvcq" Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.864886 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1cf5f077-bb9b-42de-ab25-70b762c3e2e1-ovs-socket\") pod \"nmstate-handler-dlvcq\" (UID: \"1cf5f077-bb9b-42de-ab25-70b762c3e2e1\") " pod="openshift-nmstate/nmstate-handler-dlvcq" Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.864917 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1cf5f077-bb9b-42de-ab25-70b762c3e2e1-dbus-socket\") pod \"nmstate-handler-dlvcq\" (UID: \"1cf5f077-bb9b-42de-ab25-70b762c3e2e1\") " pod="openshift-nmstate/nmstate-handler-dlvcq" Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.864964 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b15ef257-c4ff-4fd9-a04c-a92d38e51b18-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-2f9ms\" (UID: \"b15ef257-c4ff-4fd9-a04c-a92d38e51b18\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2f9ms" Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.865039 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv8gh\" (UniqueName: \"kubernetes.io/projected/92d17aed-5894-45b3-8fe9-08b5dfc7c702-kube-api-access-bv8gh\") pod \"nmstate-webhook-8474b5b9d8-j4jzl\" (UID: \"92d17aed-5894-45b3-8fe9-08b5dfc7c702\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j4jzl" Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.865178 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glqnp\" (UniqueName: \"kubernetes.io/projected/1cf5f077-bb9b-42de-ab25-70b762c3e2e1-kube-api-access-glqnp\") pod \"nmstate-handler-dlvcq\" (UID: \"1cf5f077-bb9b-42de-ab25-70b762c3e2e1\") " pod="openshift-nmstate/nmstate-handler-dlvcq" Feb 02 13:17:02 crc kubenswrapper[4721]: E0202 13:17:02.865348 4721 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 02 13:17:02 crc kubenswrapper[4721]: E0202 13:17:02.865420 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92d17aed-5894-45b3-8fe9-08b5dfc7c702-tls-key-pair podName:92d17aed-5894-45b3-8fe9-08b5dfc7c702 nodeName:}" failed. 
No retries permitted until 2026-02-02 13:17:03.365384136 +0000 UTC m=+963.667898525 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/92d17aed-5894-45b3-8fe9-08b5dfc7c702-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-j4jzl" (UID: "92d17aed-5894-45b3-8fe9-08b5dfc7c702") : secret "openshift-nmstate-webhook" not found
Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.894803 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv8gh\" (UniqueName: \"kubernetes.io/projected/92d17aed-5894-45b3-8fe9-08b5dfc7c702-kube-api-access-bv8gh\") pod \"nmstate-webhook-8474b5b9d8-j4jzl\" (UID: \"92d17aed-5894-45b3-8fe9-08b5dfc7c702\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j4jzl"
Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.894828 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q9j8\" (UniqueName: \"kubernetes.io/projected/be1a5420-ea1d-40e0-bd09-241151dc6755-kube-api-access-2q9j8\") pod \"nmstate-metrics-54757c584b-mmg2n\" (UID: \"be1a5420-ea1d-40e0-bd09-241151dc6755\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-mmg2n"
Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.969093 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1cf5f077-bb9b-42de-ab25-70b762c3e2e1-ovs-socket\") pod \"nmstate-handler-dlvcq\" (UID: \"1cf5f077-bb9b-42de-ab25-70b762c3e2e1\") " pod="openshift-nmstate/nmstate-handler-dlvcq"
Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.969138 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1cf5f077-bb9b-42de-ab25-70b762c3e2e1-dbus-socket\") pod \"nmstate-handler-dlvcq\" (UID: \"1cf5f077-bb9b-42de-ab25-70b762c3e2e1\") " pod="openshift-nmstate/nmstate-handler-dlvcq"
Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.969168 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b15ef257-c4ff-4fd9-a04c-a92d38e51b18-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-2f9ms\" (UID: \"b15ef257-c4ff-4fd9-a04c-a92d38e51b18\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2f9ms"
Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.969240 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glqnp\" (UniqueName: \"kubernetes.io/projected/1cf5f077-bb9b-42de-ab25-70b762c3e2e1-kube-api-access-glqnp\") pod \"nmstate-handler-dlvcq\" (UID: \"1cf5f077-bb9b-42de-ab25-70b762c3e2e1\") " pod="openshift-nmstate/nmstate-handler-dlvcq"
Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.969284 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b15ef257-c4ff-4fd9-a04c-a92d38e51b18-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-2f9ms\" (UID: \"b15ef257-c4ff-4fd9-a04c-a92d38e51b18\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2f9ms"
Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.969318 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g77cp\" (UniqueName: \"kubernetes.io/projected/b15ef257-c4ff-4fd9-a04c-a92d38e51b18-kube-api-access-g77cp\") pod \"nmstate-console-plugin-7754f76f8b-2f9ms\" (UID: \"b15ef257-c4ff-4fd9-a04c-a92d38e51b18\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2f9ms"
Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.969333 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1cf5f077-bb9b-42de-ab25-70b762c3e2e1-nmstate-lock\") pod \"nmstate-handler-dlvcq\" (UID: \"1cf5f077-bb9b-42de-ab25-70b762c3e2e1\") " pod="openshift-nmstate/nmstate-handler-dlvcq"
Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.969397 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1cf5f077-bb9b-42de-ab25-70b762c3e2e1-nmstate-lock\") pod \"nmstate-handler-dlvcq\" (UID: \"1cf5f077-bb9b-42de-ab25-70b762c3e2e1\") " pod="openshift-nmstate/nmstate-handler-dlvcq"
Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.969434 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1cf5f077-bb9b-42de-ab25-70b762c3e2e1-ovs-socket\") pod \"nmstate-handler-dlvcq\" (UID: \"1cf5f077-bb9b-42de-ab25-70b762c3e2e1\") " pod="openshift-nmstate/nmstate-handler-dlvcq"
Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.969642 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1cf5f077-bb9b-42de-ab25-70b762c3e2e1-dbus-socket\") pod \"nmstate-handler-dlvcq\" (UID: \"1cf5f077-bb9b-42de-ab25-70b762c3e2e1\") " pod="openshift-nmstate/nmstate-handler-dlvcq"
Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.972353 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b15ef257-c4ff-4fd9-a04c-a92d38e51b18-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-2f9ms\" (UID: \"b15ef257-c4ff-4fd9-a04c-a92d38e51b18\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2f9ms"
Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.988247 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glqnp\" (UniqueName: \"kubernetes.io/projected/1cf5f077-bb9b-42de-ab25-70b762c3e2e1-kube-api-access-glqnp\") pod \"nmstate-handler-dlvcq\" (UID: \"1cf5f077-bb9b-42de-ab25-70b762c3e2e1\") " pod="openshift-nmstate/nmstate-handler-dlvcq"
Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.992965 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b15ef257-c4ff-4fd9-a04c-a92d38e51b18-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-2f9ms\" (UID: \"b15ef257-c4ff-4fd9-a04c-a92d38e51b18\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2f9ms"
Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.996991 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g77cp\" (UniqueName: \"kubernetes.io/projected/b15ef257-c4ff-4fd9-a04c-a92d38e51b18-kube-api-access-g77cp\") pod \"nmstate-console-plugin-7754f76f8b-2f9ms\" (UID: \"b15ef257-c4ff-4fd9-a04c-a92d38e51b18\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2f9ms"
Feb 02 13:17:02 crc kubenswrapper[4721]: I0202 13:17:02.999024 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-mmg2n"
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.027263 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-dlvcq"
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.086485 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-679d56c757-8hcnt"]
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.088088 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-679d56c757-8hcnt"
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.105243 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-679d56c757-8hcnt"]
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.141497 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2f9ms"
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.279361 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1e715356-9848-439f-a13d-eb00f34521ec-console-config\") pod \"console-679d56c757-8hcnt\" (UID: \"1e715356-9848-439f-a13d-eb00f34521ec\") " pod="openshift-console/console-679d56c757-8hcnt"
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.279765 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e715356-9848-439f-a13d-eb00f34521ec-trusted-ca-bundle\") pod \"console-679d56c757-8hcnt\" (UID: \"1e715356-9848-439f-a13d-eb00f34521ec\") " pod="openshift-console/console-679d56c757-8hcnt"
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.279803 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xhnw\" (UniqueName: \"kubernetes.io/projected/1e715356-9848-439f-a13d-eb00f34521ec-kube-api-access-7xhnw\") pod \"console-679d56c757-8hcnt\" (UID: \"1e715356-9848-439f-a13d-eb00f34521ec\") " pod="openshift-console/console-679d56c757-8hcnt"
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.279836 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1e715356-9848-439f-a13d-eb00f34521ec-console-serving-cert\") pod \"console-679d56c757-8hcnt\" (UID: \"1e715356-9848-439f-a13d-eb00f34521ec\") " pod="openshift-console/console-679d56c757-8hcnt"
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.279942 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1e715356-9848-439f-a13d-eb00f34521ec-oauth-serving-cert\") pod \"console-679d56c757-8hcnt\" (UID: \"1e715356-9848-439f-a13d-eb00f34521ec\") " pod="openshift-console/console-679d56c757-8hcnt"
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.279984 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1e715356-9848-439f-a13d-eb00f34521ec-service-ca\") pod \"console-679d56c757-8hcnt\" (UID: \"1e715356-9848-439f-a13d-eb00f34521ec\") " pod="openshift-console/console-679d56c757-8hcnt"
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.280015 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1e715356-9848-439f-a13d-eb00f34521ec-console-oauth-config\") pod \"console-679d56c757-8hcnt\" (UID: \"1e715356-9848-439f-a13d-eb00f34521ec\") " pod="openshift-console/console-679d56c757-8hcnt"
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.381566 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e715356-9848-439f-a13d-eb00f34521ec-trusted-ca-bundle\") pod \"console-679d56c757-8hcnt\" (UID: \"1e715356-9848-439f-a13d-eb00f34521ec\") " pod="openshift-console/console-679d56c757-8hcnt"
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.381621 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xhnw\" (UniqueName: \"kubernetes.io/projected/1e715356-9848-439f-a13d-eb00f34521ec-kube-api-access-7xhnw\") pod \"console-679d56c757-8hcnt\" (UID: \"1e715356-9848-439f-a13d-eb00f34521ec\") " pod="openshift-console/console-679d56c757-8hcnt"
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.381654 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1e715356-9848-439f-a13d-eb00f34521ec-console-serving-cert\") pod \"console-679d56c757-8hcnt\" (UID: \"1e715356-9848-439f-a13d-eb00f34521ec\") " pod="openshift-console/console-679d56c757-8hcnt"
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.381733 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1e715356-9848-439f-a13d-eb00f34521ec-oauth-serving-cert\") pod \"console-679d56c757-8hcnt\" (UID: \"1e715356-9848-439f-a13d-eb00f34521ec\") " pod="openshift-console/console-679d56c757-8hcnt"
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.381767 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/92d17aed-5894-45b3-8fe9-08b5dfc7c702-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-j4jzl\" (UID: \"92d17aed-5894-45b3-8fe9-08b5dfc7c702\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j4jzl"
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.381797 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1e715356-9848-439f-a13d-eb00f34521ec-service-ca\") pod \"console-679d56c757-8hcnt\" (UID: \"1e715356-9848-439f-a13d-eb00f34521ec\") " pod="openshift-console/console-679d56c757-8hcnt"
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.381827 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1e715356-9848-439f-a13d-eb00f34521ec-console-oauth-config\") pod \"console-679d56c757-8hcnt\" (UID: \"1e715356-9848-439f-a13d-eb00f34521ec\") " pod="openshift-console/console-679d56c757-8hcnt"
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.381872 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1e715356-9848-439f-a13d-eb00f34521ec-console-config\") pod \"console-679d56c757-8hcnt\" (UID: \"1e715356-9848-439f-a13d-eb00f34521ec\") " pod="openshift-console/console-679d56c757-8hcnt"
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.383003 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1e715356-9848-439f-a13d-eb00f34521ec-console-config\") pod \"console-679d56c757-8hcnt\" (UID: \"1e715356-9848-439f-a13d-eb00f34521ec\") " pod="openshift-console/console-679d56c757-8hcnt"
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.383944 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e715356-9848-439f-a13d-eb00f34521ec-trusted-ca-bundle\") pod \"console-679d56c757-8hcnt\" (UID: \"1e715356-9848-439f-a13d-eb00f34521ec\") " pod="openshift-console/console-679d56c757-8hcnt"
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.384967 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1e715356-9848-439f-a13d-eb00f34521ec-service-ca\") pod \"console-679d56c757-8hcnt\" (UID: \"1e715356-9848-439f-a13d-eb00f34521ec\") " pod="openshift-console/console-679d56c757-8hcnt"
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.393936 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1e715356-9848-439f-a13d-eb00f34521ec-oauth-serving-cert\") pod \"console-679d56c757-8hcnt\" (UID: \"1e715356-9848-439f-a13d-eb00f34521ec\") " pod="openshift-console/console-679d56c757-8hcnt"
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.416719 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/92d17aed-5894-45b3-8fe9-08b5dfc7c702-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-j4jzl\" (UID: \"92d17aed-5894-45b3-8fe9-08b5dfc7c702\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j4jzl"
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.416719 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1e715356-9848-439f-a13d-eb00f34521ec-console-serving-cert\") pod \"console-679d56c757-8hcnt\" (UID: \"1e715356-9848-439f-a13d-eb00f34521ec\") " pod="openshift-console/console-679d56c757-8hcnt"
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.416925 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xhnw\" (UniqueName: \"kubernetes.io/projected/1e715356-9848-439f-a13d-eb00f34521ec-kube-api-access-7xhnw\") pod \"console-679d56c757-8hcnt\" (UID: \"1e715356-9848-439f-a13d-eb00f34521ec\") " pod="openshift-console/console-679d56c757-8hcnt"
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.428912 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1e715356-9848-439f-a13d-eb00f34521ec-console-oauth-config\") pod \"console-679d56c757-8hcnt\" (UID: \"1e715356-9848-439f-a13d-eb00f34521ec\") " pod="openshift-console/console-679d56c757-8hcnt"
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.430049 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-679d56c757-8hcnt"
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.588186 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j4jzl"
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.612477 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-mmg2n"]
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.775276 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-2f9ms"]
Feb 02 13:17:03 crc kubenswrapper[4721]: I0202 13:17:03.902544 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-679d56c757-8hcnt"]
Feb 02 13:17:04 crc kubenswrapper[4721]: I0202 13:17:04.008218 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-dlvcq" event={"ID":"1cf5f077-bb9b-42de-ab25-70b762c3e2e1","Type":"ContainerStarted","Data":"e6d62e3bcaad0869efdc8f536858a527a68a1600b89c91357fcafe050c0e7c1c"}
Feb 02 13:17:04 crc kubenswrapper[4721]: I0202 13:17:04.010293 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-679d56c757-8hcnt" event={"ID":"1e715356-9848-439f-a13d-eb00f34521ec","Type":"ContainerStarted","Data":"d99eec54d3234b8ee9ca1f4e6b988bce26f945deba80a7904e833116e7ebcfe1"}
Feb 02 13:17:04 crc kubenswrapper[4721]: I0202 13:17:04.012037 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2f9ms" event={"ID":"b15ef257-c4ff-4fd9-a04c-a92d38e51b18","Type":"ContainerStarted","Data":"f5df7a958ca3a57c3410e81afbd829f4ca8d15fe78fdbc6bed875f29e9b9702b"}
Feb 02 13:17:04 crc kubenswrapper[4721]: I0202 13:17:04.012827 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-mmg2n" event={"ID":"be1a5420-ea1d-40e0-bd09-241151dc6755","Type":"ContainerStarted","Data":"5a1ef20cfb826a1d5ddd63f0a19e9f8d9cdd66f781f8f1099ecc33c1377f0be1"}
Feb 02 13:17:04 crc kubenswrapper[4721]: I0202 13:17:04.088827 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-j4jzl"]
Feb 02 13:17:04 crc kubenswrapper[4721]: W0202 13:17:04.096608 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92d17aed_5894_45b3_8fe9_08b5dfc7c702.slice/crio-81d4f57384025303abfca59d4299f8c8ec61cd16930ed9f758531ad503fb5fb2 WatchSource:0}: Error finding container 81d4f57384025303abfca59d4299f8c8ec61cd16930ed9f758531ad503fb5fb2: Status 404 returned error can't find the container with id 81d4f57384025303abfca59d4299f8c8ec61cd16930ed9f758531ad503fb5fb2
Feb 02 13:17:05 crc kubenswrapper[4721]: I0202 13:17:05.038304 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-679d56c757-8hcnt" event={"ID":"1e715356-9848-439f-a13d-eb00f34521ec","Type":"ContainerStarted","Data":"2105e396598b1fd13640d6e576494052ae1ae901f42a2b1dd0e7f495d1eec506"}
Feb 02 13:17:05 crc kubenswrapper[4721]: I0202 13:17:05.043908 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j4jzl" event={"ID":"92d17aed-5894-45b3-8fe9-08b5dfc7c702","Type":"ContainerStarted","Data":"81d4f57384025303abfca59d4299f8c8ec61cd16930ed9f758531ad503fb5fb2"}
Feb 02 13:17:05 crc kubenswrapper[4721]: I0202 13:17:05.061304 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-679d56c757-8hcnt" podStartSLOduration=2.061288791 podStartE2EDuration="2.061288791s" podCreationTimestamp="2026-02-02 13:17:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:17:05.054932942 +0000 UTC m=+965.357447331" watchObservedRunningTime="2026-02-02 13:17:05.061288791 +0000 UTC m=+965.363803180"
Feb 02 13:17:07 crc kubenswrapper[4721]: I0202 13:17:07.067104 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-mmg2n" event={"ID":"be1a5420-ea1d-40e0-bd09-241151dc6755","Type":"ContainerStarted","Data":"e76db2fc53c20700f0800b081757abc8a1bff0a9ba54d9e7e3778d9cdd35190d"}
Feb 02 13:17:07 crc kubenswrapper[4721]: I0202 13:17:07.069417 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-dlvcq" event={"ID":"1cf5f077-bb9b-42de-ab25-70b762c3e2e1","Type":"ContainerStarted","Data":"88b7c0f5e7f7001f819fe2a158afa2cb6a24a9c28b6b363ff8d409e4b5d9b1d4"}
Feb 02 13:17:07 crc kubenswrapper[4721]: I0202 13:17:07.069587 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-dlvcq"
Feb 02 13:17:07 crc kubenswrapper[4721]: I0202 13:17:07.071814 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j4jzl" event={"ID":"92d17aed-5894-45b3-8fe9-08b5dfc7c702","Type":"ContainerStarted","Data":"1a9b0f637a6b22e8f8a09de1c4b8732bc72e65bdab10148e4aa573d3a43f94b1"}
Feb 02 13:17:07 crc kubenswrapper[4721]: I0202 13:17:07.072016 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j4jzl"
Feb 02 13:17:07 crc kubenswrapper[4721]: I0202 13:17:07.074579 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2f9ms" event={"ID":"b15ef257-c4ff-4fd9-a04c-a92d38e51b18","Type":"ContainerStarted","Data":"1d6e2225cab3e5a250021885a92cb8e5bcf5e204e570e588a2e471239e0a7a52"}
Feb 02 13:17:07 crc kubenswrapper[4721]: I0202 13:17:07.093077 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-dlvcq" podStartSLOduration=1.638088556 podStartE2EDuration="5.093042736s" podCreationTimestamp="2026-02-02 13:17:02 +0000 UTC" firstStartedPulling="2026-02-02 13:17:03.075902036 +0000 UTC m=+963.378416425" lastFinishedPulling="2026-02-02 13:17:06.530856186 +0000 UTC m=+966.833370605" observedRunningTime="2026-02-02 13:17:07.089171342 +0000 UTC m=+967.391685731" watchObservedRunningTime="2026-02-02 13:17:07.093042736 +0000 UTC m=+967.395557125"
Feb 02 13:17:07 crc kubenswrapper[4721]: I0202 13:17:07.106487 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2f9ms" podStartSLOduration=2.39822235 podStartE2EDuration="5.106466563s" podCreationTimestamp="2026-02-02 13:17:02 +0000 UTC" firstStartedPulling="2026-02-02 13:17:03.790569287 +0000 UTC m=+964.093083676" lastFinishedPulling="2026-02-02 13:17:06.4988135 +0000 UTC m=+966.801327889" observedRunningTime="2026-02-02 13:17:07.102444636 +0000 UTC m=+967.404959035" watchObservedRunningTime="2026-02-02 13:17:07.106466563 +0000 UTC m=+967.408980952"
Feb 02 13:17:07 crc kubenswrapper[4721]: I0202 13:17:07.139552 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j4jzl" podStartSLOduration=2.7089788759999998 podStartE2EDuration="5.139533046s" podCreationTimestamp="2026-02-02 13:17:02 +0000 UTC" firstStartedPulling="2026-02-02 13:17:04.099031602 +0000 UTC m=+964.401545991" lastFinishedPulling="2026-02-02 13:17:06.529585752 +0000 UTC m=+966.832100161" observedRunningTime="2026-02-02 13:17:07.133640719 +0000 UTC m=+967.436155108" watchObservedRunningTime="2026-02-02 13:17:07.139533046 +0000 UTC m=+967.442047435"
Feb 02 13:17:10 crc kubenswrapper[4721]: I0202 13:17:10.102462 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-mmg2n" event={"ID":"be1a5420-ea1d-40e0-bd09-241151dc6755","Type":"ContainerStarted","Data":"755586293cc8fe6b5dfd88bce2edec9ea124a377de4b078172f8d3e7eae575b8"}
Feb 02 13:17:10 crc kubenswrapper[4721]: I0202 13:17:10.128651 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-mmg2n" podStartSLOduration=2.194615374 podStartE2EDuration="8.128625059s" podCreationTimestamp="2026-02-02 13:17:02 +0000 UTC" firstStartedPulling="2026-02-02 13:17:03.633690138 +0000 UTC m=+963.936204527" lastFinishedPulling="2026-02-02 13:17:09.567699823 +0000 UTC m=+969.870214212" observedRunningTime="2026-02-02 13:17:10.120840461 +0000 UTC m=+970.423354850" watchObservedRunningTime="2026-02-02 13:17:10.128625059 +0000 UTC m=+970.431139458"
Feb 02 13:17:13 crc kubenswrapper[4721]: I0202 13:17:13.054228 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-dlvcq"
Feb 02 13:17:13 crc kubenswrapper[4721]: I0202 13:17:13.432027 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-679d56c757-8hcnt"
Feb 02 13:17:13 crc kubenswrapper[4721]: I0202 13:17:13.432300 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-679d56c757-8hcnt"
Feb 02 13:17:13 crc kubenswrapper[4721]: I0202 13:17:13.436621 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-679d56c757-8hcnt"
Feb 02 13:17:14 crc kubenswrapper[4721]: I0202 13:17:14.131035 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-679d56c757-8hcnt"
Feb 02 13:17:14 crc kubenswrapper[4721]: I0202 13:17:14.187704 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-858d4f646b-v8xpv"]
Feb 02 13:17:23 crc kubenswrapper[4721]: I0202 13:17:23.594404 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j4jzl"
Feb 02 13:17:39 crc kubenswrapper[4721]: I0202 13:17:39.238508 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-858d4f646b-v8xpv" podUID="5cb1d5e0-e67a-459b-ad6a-794d2f8bab70" containerName="console" containerID="cri-o://dd8792e47fe91e154df2c809101cbf81ddf00e86f1e989f7de7b429baf8fae66" gracePeriod=15
Feb 02 13:17:39 crc kubenswrapper[4721]: I0202 13:17:39.702251 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-858d4f646b-v8xpv_5cb1d5e0-e67a-459b-ad6a-794d2f8bab70/console/0.log"
Feb 02 13:17:39 crc kubenswrapper[4721]: I0202 13:17:39.702616 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-858d4f646b-v8xpv"
Feb 02 13:17:39 crc kubenswrapper[4721]: I0202 13:17:39.795168 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-console-config\") pod \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\" (UID: \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\") "
Feb 02 13:17:39 crc kubenswrapper[4721]: I0202 13:17:39.795259 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-console-serving-cert\") pod \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\" (UID: \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\") "
Feb 02 13:17:39 crc kubenswrapper[4721]: I0202 13:17:39.795373 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-oauth-serving-cert\") pod \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\" (UID: \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\") "
Feb 02 13:17:39 crc kubenswrapper[4721]: I0202 13:17:39.795409 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-658px\" (UniqueName: \"kubernetes.io/projected/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-kube-api-access-658px\") pod \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\" (UID: \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\") "
Feb 02 13:17:39 crc kubenswrapper[4721]: I0202 13:17:39.795467 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-console-oauth-config\") pod \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\" (UID: \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\") "
Feb 02 13:17:39 crc kubenswrapper[4721]: I0202 13:17:39.795549 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-trusted-ca-bundle\") pod \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\" (UID: \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\") "
Feb 02 13:17:39 crc kubenswrapper[4721]: I0202 13:17:39.795577 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-service-ca\") pod \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\" (UID: \"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70\") "
Feb 02 13:17:39 crc kubenswrapper[4721]: I0202 13:17:39.795850 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "5cb1d5e0-e67a-459b-ad6a-794d2f8bab70" (UID: "5cb1d5e0-e67a-459b-ad6a-794d2f8bab70"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:17:39 crc kubenswrapper[4721]: I0202 13:17:39.795844 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-console-config" (OuterVolumeSpecName: "console-config") pod "5cb1d5e0-e67a-459b-ad6a-794d2f8bab70" (UID: "5cb1d5e0-e67a-459b-ad6a-794d2f8bab70"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:17:39 crc kubenswrapper[4721]: I0202 13:17:39.796212 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "5cb1d5e0-e67a-459b-ad6a-794d2f8bab70" (UID: "5cb1d5e0-e67a-459b-ad6a-794d2f8bab70"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:17:39 crc kubenswrapper[4721]: I0202 13:17:39.796208 4721 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 13:17:39 crc kubenswrapper[4721]: I0202 13:17:39.796265 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-service-ca" (OuterVolumeSpecName: "service-ca") pod "5cb1d5e0-e67a-459b-ad6a-794d2f8bab70" (UID: "5cb1d5e0-e67a-459b-ad6a-794d2f8bab70"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:17:39 crc kubenswrapper[4721]: I0202 13:17:39.801334 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "5cb1d5e0-e67a-459b-ad6a-794d2f8bab70" (UID: "5cb1d5e0-e67a-459b-ad6a-794d2f8bab70"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:17:39 crc kubenswrapper[4721]: I0202 13:17:39.804562 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "5cb1d5e0-e67a-459b-ad6a-794d2f8bab70" (UID: "5cb1d5e0-e67a-459b-ad6a-794d2f8bab70"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:17:39 crc kubenswrapper[4721]: I0202 13:17:39.805123 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-kube-api-access-658px" (OuterVolumeSpecName: "kube-api-access-658px") pod "5cb1d5e0-e67a-459b-ad6a-794d2f8bab70" (UID: "5cb1d5e0-e67a-459b-ad6a-794d2f8bab70"). InnerVolumeSpecName "kube-api-access-658px". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:17:39 crc kubenswrapper[4721]: I0202 13:17:39.897468 4721 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 02 13:17:39 crc kubenswrapper[4721]: I0202 13:17:39.897506 4721 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 13:17:39 crc kubenswrapper[4721]: I0202 13:17:39.897516 4721 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-service-ca\") on node \"crc\" DevicePath \"\""
Feb 02 13:17:39 crc kubenswrapper[4721]: I0202 13:17:39.897526 4721 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-console-config\") on node \"crc\" DevicePath \"\""
Feb 02 13:17:39 crc kubenswrapper[4721]: I0202 13:17:39.897534 4721 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-console-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 13:17:39 crc kubenswrapper[4721]: I0202 13:17:39.897544 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-658px\" (UniqueName: \"kubernetes.io/projected/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70-kube-api-access-658px\") on node \"crc\" DevicePath \"\""
Feb 02 13:17:40 crc kubenswrapper[4721]: I0202 13:17:40.318175 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-858d4f646b-v8xpv_5cb1d5e0-e67a-459b-ad6a-794d2f8bab70/console/0.log"
Feb 02 13:17:40 crc kubenswrapper[4721]: I0202 13:17:40.318223 4721 generic.go:334] "Generic (PLEG): container finished" podID="5cb1d5e0-e67a-459b-ad6a-794d2f8bab70" containerID="dd8792e47fe91e154df2c809101cbf81ddf00e86f1e989f7de7b429baf8fae66" exitCode=2
Feb 02 13:17:40 crc kubenswrapper[4721]: I0202 13:17:40.318253 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-858d4f646b-v8xpv" event={"ID":"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70","Type":"ContainerDied","Data":"dd8792e47fe91e154df2c809101cbf81ddf00e86f1e989f7de7b429baf8fae66"}
Feb 02 13:17:40 crc kubenswrapper[4721]: I0202 13:17:40.318279 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-858d4f646b-v8xpv" event={"ID":"5cb1d5e0-e67a-459b-ad6a-794d2f8bab70","Type":"ContainerDied","Data":"550ae8a7d8f413e415a65a6ec4a23601971fd9d7d7d542219e95480b30d156d6"}
Feb 02 13:17:40 crc kubenswrapper[4721]: I0202 13:17:40.318295 4721 scope.go:117] "RemoveContainer" containerID="dd8792e47fe91e154df2c809101cbf81ddf00e86f1e989f7de7b429baf8fae66"
Feb 02 13:17:40 crc kubenswrapper[4721]: I0202 13:17:40.318305 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-858d4f646b-v8xpv"
Feb 02 13:17:40 crc kubenswrapper[4721]: I0202 13:17:40.344163 4721 scope.go:117] "RemoveContainer" containerID="dd8792e47fe91e154df2c809101cbf81ddf00e86f1e989f7de7b429baf8fae66"
Feb 02 13:17:40 crc kubenswrapper[4721]: E0202 13:17:40.346258 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd8792e47fe91e154df2c809101cbf81ddf00e86f1e989f7de7b429baf8fae66\": container with ID starting with dd8792e47fe91e154df2c809101cbf81ddf00e86f1e989f7de7b429baf8fae66 not found: ID does not exist" containerID="dd8792e47fe91e154df2c809101cbf81ddf00e86f1e989f7de7b429baf8fae66"
Feb 02 13:17:40 crc kubenswrapper[4721]: I0202 13:17:40.346293 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd8792e47fe91e154df2c809101cbf81ddf00e86f1e989f7de7b429baf8fae66"} err="failed to get container status \"dd8792e47fe91e154df2c809101cbf81ddf00e86f1e989f7de7b429baf8fae66\": rpc error: code = NotFound desc = could not find container \"dd8792e47fe91e154df2c809101cbf81ddf00e86f1e989f7de7b429baf8fae66\": container with ID starting with dd8792e47fe91e154df2c809101cbf81ddf00e86f1e989f7de7b429baf8fae66 not found: ID does not exist"
Feb 02 13:17:40 crc kubenswrapper[4721]: I0202 13:17:40.355530 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-858d4f646b-v8xpv"]
Feb 02 13:17:40 crc kubenswrapper[4721]: I0202 13:17:40.365997 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-858d4f646b-v8xpv"]
Feb 02 13:17:40 crc kubenswrapper[4721]: I0202 13:17:40.419507 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cb1d5e0-e67a-459b-ad6a-794d2f8bab70" path="/var/lib/kubelet/pods/5cb1d5e0-e67a-459b-ad6a-794d2f8bab70/volumes"
Feb 02 13:17:41 crc kubenswrapper[4721]: I0202 13:17:41.051931 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg"]
Feb 02 13:17:41 crc kubenswrapper[4721]: E0202 13:17:41.052284 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cb1d5e0-e67a-459b-ad6a-794d2f8bab70" containerName="console"
Feb 02 13:17:41 crc kubenswrapper[4721]: I0202 13:17:41.052300 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cb1d5e0-e67a-459b-ad6a-794d2f8bab70" containerName="console"
Feb 02 13:17:41 crc kubenswrapper[4721]: I0202 13:17:41.052436 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cb1d5e0-e67a-459b-ad6a-794d2f8bab70" containerName="console"
Feb 02 13:17:41 crc kubenswrapper[4721]: I0202 13:17:41.053642 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg"
Feb 02 13:17:41 crc kubenswrapper[4721]: I0202 13:17:41.056335 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 02 13:17:41 crc kubenswrapper[4721]: I0202 13:17:41.067309 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg"]
Feb 02 13:17:41 crc kubenswrapper[4721]: I0202 13:17:41.219053 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtl8v\" (UniqueName: \"kubernetes.io/projected/6f007b81-04cd-4913-ad24-51aa6c5b60c8-kube-api-access-gtl8v\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg\" (UID: \"6f007b81-04cd-4913-ad24-51aa6c5b60c8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg"
Feb 02 13:17:41 crc kubenswrapper[4721]: I0202 13:17:41.219424 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f007b81-04cd-4913-ad24-51aa6c5b60c8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg\" (UID: \"6f007b81-04cd-4913-ad24-51aa6c5b60c8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg"
Feb 02 13:17:41 crc kubenswrapper[4721]: I0202 13:17:41.219546 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f007b81-04cd-4913-ad24-51aa6c5b60c8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg\" (UID: \"6f007b81-04cd-4913-ad24-51aa6c5b60c8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg"
Feb 02 13:17:41 crc kubenswrapper[4721]: I0202 13:17:41.320741 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtl8v\" (UniqueName: \"kubernetes.io/projected/6f007b81-04cd-4913-ad24-51aa6c5b60c8-kube-api-access-gtl8v\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg\" (UID: \"6f007b81-04cd-4913-ad24-51aa6c5b60c8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg"
Feb 02 13:17:41 crc kubenswrapper[4721]: I0202 13:17:41.320818 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f007b81-04cd-4913-ad24-51aa6c5b60c8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg\" (UID: \"6f007b81-04cd-4913-ad24-51aa6c5b60c8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg"
Feb 02 13:17:41 crc kubenswrapper[4721]: I0202 13:17:41.320852 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f007b81-04cd-4913-ad24-51aa6c5b60c8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg\" (UID: \"6f007b81-04cd-4913-ad24-51aa6c5b60c8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg"
Feb 02 13:17:41 crc kubenswrapper[4721]: I0202 13:17:41.321415 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f007b81-04cd-4913-ad24-51aa6c5b60c8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg\" (UID: \"6f007b81-04cd-4913-ad24-51aa6c5b60c8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg"
Feb 02 13:17:41 crc kubenswrapper[4721]: I0202 13:17:41.321466 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f007b81-04cd-4913-ad24-51aa6c5b60c8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg\" (UID: \"6f007b81-04cd-4913-ad24-51aa6c5b60c8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg"
Feb 02 13:17:41 crc kubenswrapper[4721]: I0202 13:17:41.340543 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtl8v\" (UniqueName: \"kubernetes.io/projected/6f007b81-04cd-4913-ad24-51aa6c5b60c8-kube-api-access-gtl8v\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg\" (UID: \"6f007b81-04cd-4913-ad24-51aa6c5b60c8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg"
Feb 02 13:17:41 crc kubenswrapper[4721]: I0202 13:17:41.392891 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg"
Feb 02 13:17:41 crc kubenswrapper[4721]: I0202 13:17:41.814319 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg"]
Feb 02 13:17:42 crc kubenswrapper[4721]: I0202 13:17:42.338668 4721 generic.go:334] "Generic (PLEG): container finished" podID="6f007b81-04cd-4913-ad24-51aa6c5b60c8" containerID="28f91195297a776d46f7db103d4985273de948baa29cc9a5e82f3cc4d87a54e0" exitCode=0
Feb 02 13:17:42 crc kubenswrapper[4721]: I0202 13:17:42.338849 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg" event={"ID":"6f007b81-04cd-4913-ad24-51aa6c5b60c8","Type":"ContainerDied","Data":"28f91195297a776d46f7db103d4985273de948baa29cc9a5e82f3cc4d87a54e0"}
Feb 02 13:17:42 crc kubenswrapper[4721]: I0202 13:17:42.339789 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg" event={"ID":"6f007b81-04cd-4913-ad24-51aa6c5b60c8","Type":"ContainerStarted","Data":"fcc3c9a977e019475fc2f0060552e4b108760efb6fd036a923df095189c99a45"}
Feb 02 13:17:42 crc kubenswrapper[4721]: I0202 13:17:42.340415 4721 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 02 13:17:44 crc kubenswrapper[4721]: I0202 13:17:44.354461 4721 generic.go:334] "Generic (PLEG): container finished" podID="6f007b81-04cd-4913-ad24-51aa6c5b60c8" containerID="83ca642d436fbfde4d4a1af0b2ffeac20b596005b9eeaab9d33b871989acea35" exitCode=0
Feb 02 13:17:44 crc kubenswrapper[4721]: I0202 13:17:44.354551 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg" event={"ID":"6f007b81-04cd-4913-ad24-51aa6c5b60c8","Type":"ContainerDied","Data":"83ca642d436fbfde4d4a1af0b2ffeac20b596005b9eeaab9d33b871989acea35"}
Feb 02 13:17:44 crc kubenswrapper[4721]: I0202 13:17:44.763747 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 13:17:44 crc kubenswrapper[4721]: I0202 13:17:44.763810 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 13:17:45 crc kubenswrapper[4721]: I0202 13:17:45.366012 4721 generic.go:334] "Generic (PLEG): container finished" podID="6f007b81-04cd-4913-ad24-51aa6c5b60c8" containerID="05c7b1d8da5ec1121b2db7edd10eb8b8157a4017a8a0c59b7beee7bbcbde7f2e" exitCode=0
Feb 02 13:17:45 crc kubenswrapper[4721]: I0202 13:17:45.366117 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg" event={"ID":"6f007b81-04cd-4913-ad24-51aa6c5b60c8","Type":"ContainerDied","Data":"05c7b1d8da5ec1121b2db7edd10eb8b8157a4017a8a0c59b7beee7bbcbde7f2e"}
Feb 02 13:17:46 crc kubenswrapper[4721]: I0202 13:17:46.670785 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg"
Feb 02 13:17:46 crc kubenswrapper[4721]: I0202 13:17:46.813125 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f007b81-04cd-4913-ad24-51aa6c5b60c8-bundle\") pod \"6f007b81-04cd-4913-ad24-51aa6c5b60c8\" (UID: \"6f007b81-04cd-4913-ad24-51aa6c5b60c8\") "
Feb 02 13:17:46 crc kubenswrapper[4721]: I0202 13:17:46.813179 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtl8v\" (UniqueName: \"kubernetes.io/projected/6f007b81-04cd-4913-ad24-51aa6c5b60c8-kube-api-access-gtl8v\") pod \"6f007b81-04cd-4913-ad24-51aa6c5b60c8\" (UID: \"6f007b81-04cd-4913-ad24-51aa6c5b60c8\") "
Feb 02 13:17:46 crc kubenswrapper[4721]: I0202 13:17:46.813261 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f007b81-04cd-4913-ad24-51aa6c5b60c8-util\") pod \"6f007b81-04cd-4913-ad24-51aa6c5b60c8\" (UID: \"6f007b81-04cd-4913-ad24-51aa6c5b60c8\") "
Feb 02 13:17:46 crc kubenswrapper[4721]: I0202 13:17:46.814777 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f007b81-04cd-4913-ad24-51aa6c5b60c8-bundle" (OuterVolumeSpecName: "bundle") pod "6f007b81-04cd-4913-ad24-51aa6c5b60c8" (UID: "6f007b81-04cd-4913-ad24-51aa6c5b60c8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:17:46 crc kubenswrapper[4721]: I0202 13:17:46.819329 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f007b81-04cd-4913-ad24-51aa6c5b60c8-kube-api-access-gtl8v" (OuterVolumeSpecName: "kube-api-access-gtl8v") pod "6f007b81-04cd-4913-ad24-51aa6c5b60c8" (UID: "6f007b81-04cd-4913-ad24-51aa6c5b60c8"). InnerVolumeSpecName "kube-api-access-gtl8v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:17:46 crc kubenswrapper[4721]: I0202 13:17:46.829341 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f007b81-04cd-4913-ad24-51aa6c5b60c8-util" (OuterVolumeSpecName: "util") pod "6f007b81-04cd-4913-ad24-51aa6c5b60c8" (UID: "6f007b81-04cd-4913-ad24-51aa6c5b60c8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:17:46 crc kubenswrapper[4721]: I0202 13:17:46.915381 4721 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f007b81-04cd-4913-ad24-51aa6c5b60c8-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 13:17:46 crc kubenswrapper[4721]: I0202 13:17:46.915469 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtl8v\" (UniqueName: \"kubernetes.io/projected/6f007b81-04cd-4913-ad24-51aa6c5b60c8-kube-api-access-gtl8v\") on node \"crc\" DevicePath \"\""
Feb 02 13:17:46 crc kubenswrapper[4721]: I0202 13:17:46.915532 4721 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f007b81-04cd-4913-ad24-51aa6c5b60c8-util\") on node \"crc\" DevicePath \"\""
Feb 02 13:17:47 crc kubenswrapper[4721]: I0202 13:17:47.380129 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg" event={"ID":"6f007b81-04cd-4913-ad24-51aa6c5b60c8","Type":"ContainerDied","Data":"fcc3c9a977e019475fc2f0060552e4b108760efb6fd036a923df095189c99a45"}
Feb 02 13:17:47 crc kubenswrapper[4721]: I0202 13:17:47.380165 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcc3c9a977e019475fc2f0060552e4b108760efb6fd036a923df095189c99a45"
Feb 02 13:17:47 crc kubenswrapper[4721]: I0202 13:17:47.380226 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.449864 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-67895b6557-xpzcz"]
Feb 02 13:17:56 crc kubenswrapper[4721]: E0202 13:17:56.450769 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f007b81-04cd-4913-ad24-51aa6c5b60c8" containerName="util"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.450785 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f007b81-04cd-4913-ad24-51aa6c5b60c8" containerName="util"
Feb 02 13:17:56 crc kubenswrapper[4721]: E0202 13:17:56.450805 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f007b81-04cd-4913-ad24-51aa6c5b60c8" containerName="extract"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.450812 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f007b81-04cd-4913-ad24-51aa6c5b60c8" containerName="extract"
Feb 02 13:17:56 crc kubenswrapper[4721]: E0202 13:17:56.450832 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f007b81-04cd-4913-ad24-51aa6c5b60c8" containerName="pull"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.450841 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f007b81-04cd-4913-ad24-51aa6c5b60c8" containerName="pull"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.451033 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f007b81-04cd-4913-ad24-51aa6c5b60c8" containerName="extract"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.451723 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-67895b6557-xpzcz"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.453722 4721 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-dhg9g"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.454350 4721 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.454690 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.454965 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.469512 4721 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.476726 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-67895b6557-xpzcz"]
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.569095 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4c6e741b-2539-4be0-898c-5fee37f67d21-webhook-cert\") pod \"metallb-operator-controller-manager-67895b6557-xpzcz\" (UID: \"4c6e741b-2539-4be0-898c-5fee37f67d21\") " pod="metallb-system/metallb-operator-controller-manager-67895b6557-xpzcz"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.569368 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjmbx\" (UniqueName: \"kubernetes.io/projected/4c6e741b-2539-4be0-898c-5fee37f67d21-kube-api-access-kjmbx\") pod \"metallb-operator-controller-manager-67895b6557-xpzcz\" (UID: \"4c6e741b-2539-4be0-898c-5fee37f67d21\") " pod="metallb-system/metallb-operator-controller-manager-67895b6557-xpzcz"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.569696 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4c6e741b-2539-4be0-898c-5fee37f67d21-apiservice-cert\") pod \"metallb-operator-controller-manager-67895b6557-xpzcz\" (UID: \"4c6e741b-2539-4be0-898c-5fee37f67d21\") " pod="metallb-system/metallb-operator-controller-manager-67895b6557-xpzcz"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.671153 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4c6e741b-2539-4be0-898c-5fee37f67d21-apiservice-cert\") pod \"metallb-operator-controller-manager-67895b6557-xpzcz\" (UID: \"4c6e741b-2539-4be0-898c-5fee37f67d21\") " pod="metallb-system/metallb-operator-controller-manager-67895b6557-xpzcz"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.671218 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4c6e741b-2539-4be0-898c-5fee37f67d21-webhook-cert\") pod \"metallb-operator-controller-manager-67895b6557-xpzcz\" (UID: \"4c6e741b-2539-4be0-898c-5fee37f67d21\") " pod="metallb-system/metallb-operator-controller-manager-67895b6557-xpzcz"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.671264 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjmbx\" (UniqueName: \"kubernetes.io/projected/4c6e741b-2539-4be0-898c-5fee37f67d21-kube-api-access-kjmbx\") pod \"metallb-operator-controller-manager-67895b6557-xpzcz\" (UID: \"4c6e741b-2539-4be0-898c-5fee37f67d21\") " pod="metallb-system/metallb-operator-controller-manager-67895b6557-xpzcz"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.682802 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-bb6bc86c6-l2cpd"]
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.683936 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-bb6bc86c6-l2cpd"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.685908 4721 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.686046 4721 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.686193 4721 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-d48t5"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.688342 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4c6e741b-2539-4be0-898c-5fee37f67d21-apiservice-cert\") pod \"metallb-operator-controller-manager-67895b6557-xpzcz\" (UID: \"4c6e741b-2539-4be0-898c-5fee37f67d21\") " pod="metallb-system/metallb-operator-controller-manager-67895b6557-xpzcz"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.689339 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjmbx\" (UniqueName: \"kubernetes.io/projected/4c6e741b-2539-4be0-898c-5fee37f67d21-kube-api-access-kjmbx\") pod \"metallb-operator-controller-manager-67895b6557-xpzcz\" (UID: \"4c6e741b-2539-4be0-898c-5fee37f67d21\") " pod="metallb-system/metallb-operator-controller-manager-67895b6557-xpzcz"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.689887 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4c6e741b-2539-4be0-898c-5fee37f67d21-webhook-cert\") pod \"metallb-operator-controller-manager-67895b6557-xpzcz\" (UID: \"4c6e741b-2539-4be0-898c-5fee37f67d21\") " pod="metallb-system/metallb-operator-controller-manager-67895b6557-xpzcz"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.702578 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-bb6bc86c6-l2cpd"]
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.770604 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-67895b6557-xpzcz"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.772629 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10a7b124-f250-42d3-9e7c-af29d7204edb-webhook-cert\") pod \"metallb-operator-webhook-server-bb6bc86c6-l2cpd\" (UID: \"10a7b124-f250-42d3-9e7c-af29d7204edb\") " pod="metallb-system/metallb-operator-webhook-server-bb6bc86c6-l2cpd"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.772664 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/10a7b124-f250-42d3-9e7c-af29d7204edb-apiservice-cert\") pod \"metallb-operator-webhook-server-bb6bc86c6-l2cpd\" (UID: \"10a7b124-f250-42d3-9e7c-af29d7204edb\") " pod="metallb-system/metallb-operator-webhook-server-bb6bc86c6-l2cpd"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.772764 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dnfp\" (UniqueName: \"kubernetes.io/projected/10a7b124-f250-42d3-9e7c-af29d7204edb-kube-api-access-4dnfp\") pod \"metallb-operator-webhook-server-bb6bc86c6-l2cpd\" (UID: \"10a7b124-f250-42d3-9e7c-af29d7204edb\") " pod="metallb-system/metallb-operator-webhook-server-bb6bc86c6-l2cpd"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.874855 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dnfp\" (UniqueName: \"kubernetes.io/projected/10a7b124-f250-42d3-9e7c-af29d7204edb-kube-api-access-4dnfp\") pod \"metallb-operator-webhook-server-bb6bc86c6-l2cpd\" (UID: \"10a7b124-f250-42d3-9e7c-af29d7204edb\") " pod="metallb-system/metallb-operator-webhook-server-bb6bc86c6-l2cpd"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.875306 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10a7b124-f250-42d3-9e7c-af29d7204edb-webhook-cert\") pod \"metallb-operator-webhook-server-bb6bc86c6-l2cpd\" (UID: \"10a7b124-f250-42d3-9e7c-af29d7204edb\") " pod="metallb-system/metallb-operator-webhook-server-bb6bc86c6-l2cpd"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.875345 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/10a7b124-f250-42d3-9e7c-af29d7204edb-apiservice-cert\") pod \"metallb-operator-webhook-server-bb6bc86c6-l2cpd\" (UID: \"10a7b124-f250-42d3-9e7c-af29d7204edb\") " pod="metallb-system/metallb-operator-webhook-server-bb6bc86c6-l2cpd"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.880814 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/10a7b124-f250-42d3-9e7c-af29d7204edb-apiservice-cert\") pod \"metallb-operator-webhook-server-bb6bc86c6-l2cpd\" (UID: \"10a7b124-f250-42d3-9e7c-af29d7204edb\") " pod="metallb-system/metallb-operator-webhook-server-bb6bc86c6-l2cpd"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.883785 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10a7b124-f250-42d3-9e7c-af29d7204edb-webhook-cert\") pod \"metallb-operator-webhook-server-bb6bc86c6-l2cpd\" (UID: \"10a7b124-f250-42d3-9e7c-af29d7204edb\") " pod="metallb-system/metallb-operator-webhook-server-bb6bc86c6-l2cpd"
Feb 02 13:17:56 crc kubenswrapper[4721]: I0202 13:17:56.898030 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dnfp\" (UniqueName: \"kubernetes.io/projected/10a7b124-f250-42d3-9e7c-af29d7204edb-kube-api-access-4dnfp\") pod \"metallb-operator-webhook-server-bb6bc86c6-l2cpd\" (UID: \"10a7b124-f250-42d3-9e7c-af29d7204edb\") " pod="metallb-system/metallb-operator-webhook-server-bb6bc86c6-l2cpd"
Feb 02 13:17:57 crc kubenswrapper[4721]: I0202 13:17:57.072151 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-bb6bc86c6-l2cpd"
Feb 02 13:17:57 crc kubenswrapper[4721]: I0202 13:17:57.374267 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-67895b6557-xpzcz"]
Feb 02 13:17:57 crc kubenswrapper[4721]: I0202 13:17:57.448330 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-67895b6557-xpzcz" event={"ID":"4c6e741b-2539-4be0-898c-5fee37f67d21","Type":"ContainerStarted","Data":"a51b6dfaf849b0aa96870de27c5a75504a8a4ffcf6aed84b60bdfcf3507c6156"}
Feb 02 13:17:57 crc kubenswrapper[4721]: I0202 13:17:57.458841 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-bb6bc86c6-l2cpd"]
Feb 02 13:17:58 crc kubenswrapper[4721]: I0202 13:17:58.469019 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-bb6bc86c6-l2cpd" event={"ID":"10a7b124-f250-42d3-9e7c-af29d7204edb","Type":"ContainerStarted","Data":"ab0b511b610a9c06890090fa80bc04f289f5e46dd00161f5fce348a49b20787d"}
Feb 02 13:18:02 crc kubenswrapper[4721]: I0202 13:18:02.521312 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-bb6bc86c6-l2cpd" event={"ID":"10a7b124-f250-42d3-9e7c-af29d7204edb","Type":"ContainerStarted","Data":"be875bc33c8967c4d3e45833bc59c06ce1a6a4bc3b27d6a16bf8bbc3482ffb1b"}
Feb 02 13:18:02 crc kubenswrapper[4721]: I0202 13:18:02.522851 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-bb6bc86c6-l2cpd"
Feb 02 13:18:02 crc kubenswrapper[4721]: I0202 13:18:02.524194 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-67895b6557-xpzcz" event={"ID":"4c6e741b-2539-4be0-898c-5fee37f67d21","Type":"ContainerStarted","Data":"c61d42ada705799b12731aca39ecad8f807265698055e33634526ddffcf5831e"}
Feb 02 13:18:02 crc kubenswrapper[4721]: I0202 13:18:02.524616 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-67895b6557-xpzcz"
Feb 02 13:18:02 crc kubenswrapper[4721]: I0202 13:18:02.547796 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-bb6bc86c6-l2cpd" podStartSLOduration=2.142755597 podStartE2EDuration="6.547775597s" podCreationTimestamp="2026-02-02 13:17:56 +0000 UTC" firstStartedPulling="2026-02-02 13:17:57.465251046 +0000 UTC m=+1017.767765435" lastFinishedPulling="2026-02-02 13:18:01.870271046 +0000 UTC m=+1022.172785435" observedRunningTime="2026-02-02 13:18:02.540923073 +0000 UTC m=+1022.843437462" watchObservedRunningTime="2026-02-02 13:18:02.547775597 +0000 UTC m=+1022.850289986"
Feb 02 13:18:02 crc kubenswrapper[4721]: I0202 13:18:02.572242 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-67895b6557-xpzcz" podStartSLOduration=2.118255718 podStartE2EDuration="6.572221784s" podCreationTimestamp="2026-02-02 13:17:56 +0000 UTC" firstStartedPulling="2026-02-02 13:17:57.396534718 +0000 UTC m=+1017.699049107" lastFinishedPulling="2026-02-02 13:18:01.850500784 +0000 UTC m=+1022.153015173" observedRunningTime="2026-02-02 13:18:02.562312708 +0000 UTC m=+1022.864827127" watchObservedRunningTime="2026-02-02 13:18:02.572221784 +0000 UTC m=+1022.874736183"
Feb 02 13:18:14 crc kubenswrapper[4721]: I0202 13:18:14.764803 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 13:18:14 crc kubenswrapper[4721]: I0202 13:18:14.765468 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 13:18:17 crc kubenswrapper[4721]: I0202 13:18:17.078571 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-bb6bc86c6-l2cpd"
Feb 02 13:18:36 crc kubenswrapper[4721]: I0202 13:18:36.780990 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-67895b6557-xpzcz"
Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.436351 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-8ts6n"]
Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.439231 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-8ts6n"
Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.441301 4721 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.441454 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.441532 4721 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-7k24k"
Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.451942 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-4t8pn"]
Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.452897 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4t8pn"
Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.454621 4721 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.465052 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-4t8pn"]
Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.498643 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4fda33e0-d0a3-4266-aeb1-fc07965d8c35-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-4t8pn\" (UID: \"4fda33e0-d0a3-4266-aeb1-fc07965d8c35\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4t8pn"
Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.498905 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5f685485-23a9-45dd-90cd-62ab47eab713-frr-conf\") pod \"frr-k8s-8ts6n\" (UID: \"5f685485-23a9-45dd-90cd-62ab47eab713\") " pod="metallb-system/frr-k8s-8ts6n"
Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.498949 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5f685485-23a9-45dd-90cd-62ab47eab713-frr-sockets\") pod \"frr-k8s-8ts6n\" (UID: \"5f685485-23a9-45dd-90cd-62ab47eab713\") " pod="metallb-system/frr-k8s-8ts6n"
Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.498987 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwfqg\" (UniqueName: \"kubernetes.io/projected/4fda33e0-d0a3-4266-aeb1-fc07965d8c35-kube-api-access-mwfqg\") pod \"frr-k8s-webhook-server-7df86c4f6c-4t8pn\" (UID: \"4fda33e0-d0a3-4266-aeb1-fc07965d8c35\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4t8pn"
Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.499044 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5f685485-23a9-45dd-90cd-62ab47eab713-reloader\") pod \"frr-k8s-8ts6n\" (UID: \"5f685485-23a9-45dd-90cd-62ab47eab713\") " pod="metallb-system/frr-k8s-8ts6n"
Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.499096 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2wbp\" (UniqueName: \"kubernetes.io/projected/5f685485-23a9-45dd-90cd-62ab47eab713-kube-api-access-n2wbp\") pod \"frr-k8s-8ts6n\" (UID: \"5f685485-23a9-45dd-90cd-62ab47eab713\") " pod="metallb-system/frr-k8s-8ts6n"
Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.499120 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5f685485-23a9-45dd-90cd-62ab47eab713-metrics\") pod \"frr-k8s-8ts6n\" (UID: \"5f685485-23a9-45dd-90cd-62ab47eab713\") " pod="metallb-system/frr-k8s-8ts6n"
Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.499139 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f685485-23a9-45dd-90cd-62ab47eab713-metrics-certs\") pod \"frr-k8s-8ts6n\" (UID: \"5f685485-23a9-45dd-90cd-62ab47eab713\") "
pod="metallb-system/frr-k8s-8ts6n" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.499171 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5f685485-23a9-45dd-90cd-62ab47eab713-frr-startup\") pod \"frr-k8s-8ts6n\" (UID: \"5f685485-23a9-45dd-90cd-62ab47eab713\") " pod="metallb-system/frr-k8s-8ts6n" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.548189 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-2hhvl"] Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.549693 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-2hhvl" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.551288 4721 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.551356 4721 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.551601 4721 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-kmfnz" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.551636 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.566308 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-rq76j"] Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.567612 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-rq76j" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.571646 4721 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.611619 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5f685485-23a9-45dd-90cd-62ab47eab713-reloader\") pod \"frr-k8s-8ts6n\" (UID: \"5f685485-23a9-45dd-90cd-62ab47eab713\") " pod="metallb-system/frr-k8s-8ts6n" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.611741 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2wbp\" (UniqueName: \"kubernetes.io/projected/5f685485-23a9-45dd-90cd-62ab47eab713-kube-api-access-n2wbp\") pod \"frr-k8s-8ts6n\" (UID: \"5f685485-23a9-45dd-90cd-62ab47eab713\") " pod="metallb-system/frr-k8s-8ts6n" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.611781 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5f685485-23a9-45dd-90cd-62ab47eab713-metrics\") pod \"frr-k8s-8ts6n\" (UID: \"5f685485-23a9-45dd-90cd-62ab47eab713\") " pod="metallb-system/frr-k8s-8ts6n" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.611797 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f685485-23a9-45dd-90cd-62ab47eab713-metrics-certs\") pod \"frr-k8s-8ts6n\" (UID: \"5f685485-23a9-45dd-90cd-62ab47eab713\") " pod="metallb-system/frr-k8s-8ts6n" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.611855 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" 
(UniqueName: \"kubernetes.io/configmap/5f685485-23a9-45dd-90cd-62ab47eab713-frr-startup\") pod \"frr-k8s-8ts6n\" (UID: \"5f685485-23a9-45dd-90cd-62ab47eab713\") " pod="metallb-system/frr-k8s-8ts6n" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.611966 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4fda33e0-d0a3-4266-aeb1-fc07965d8c35-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-4t8pn\" (UID: \"4fda33e0-d0a3-4266-aeb1-fc07965d8c35\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4t8pn" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.611996 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5f685485-23a9-45dd-90cd-62ab47eab713-frr-conf\") pod \"frr-k8s-8ts6n\" (UID: \"5f685485-23a9-45dd-90cd-62ab47eab713\") " pod="metallb-system/frr-k8s-8ts6n" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.612045 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5f685485-23a9-45dd-90cd-62ab47eab713-frr-sockets\") pod \"frr-k8s-8ts6n\" (UID: \"5f685485-23a9-45dd-90cd-62ab47eab713\") " pod="metallb-system/frr-k8s-8ts6n" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.612109 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwfqg\" (UniqueName: \"kubernetes.io/projected/4fda33e0-d0a3-4266-aeb1-fc07965d8c35-kube-api-access-mwfqg\") pod \"frr-k8s-webhook-server-7df86c4f6c-4t8pn\" (UID: \"4fda33e0-d0a3-4266-aeb1-fc07965d8c35\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4t8pn" Feb 02 13:18:37 crc kubenswrapper[4721]: E0202 13:18:37.612554 4721 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.612566 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5f685485-23a9-45dd-90cd-62ab47eab713-reloader\") pod \"frr-k8s-8ts6n\" (UID: \"5f685485-23a9-45dd-90cd-62ab47eab713\") " pod="metallb-system/frr-k8s-8ts6n" Feb 02 13:18:37 crc kubenswrapper[4721]: E0202 13:18:37.612618 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4fda33e0-d0a3-4266-aeb1-fc07965d8c35-cert podName:4fda33e0-d0a3-4266-aeb1-fc07965d8c35 nodeName:}" failed. No retries permitted until 2026-02-02 13:18:38.112596932 +0000 UTC m=+1058.415111321 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4fda33e0-d0a3-4266-aeb1-fc07965d8c35-cert") pod "frr-k8s-webhook-server-7df86c4f6c-4t8pn" (UID: "4fda33e0-d0a3-4266-aeb1-fc07965d8c35") : secret "frr-k8s-webhook-server-cert" not found Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.612911 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5f685485-23a9-45dd-90cd-62ab47eab713-metrics\") pod \"frr-k8s-8ts6n\" (UID: \"5f685485-23a9-45dd-90cd-62ab47eab713\") " pod="metallb-system/frr-k8s-8ts6n" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.613133 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5f685485-23a9-45dd-90cd-62ab47eab713-frr-conf\") pod \"frr-k8s-8ts6n\" (UID: \"5f685485-23a9-45dd-90cd-62ab47eab713\") " pod="metallb-system/frr-k8s-8ts6n" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.613179 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5f685485-23a9-45dd-90cd-62ab47eab713-frr-sockets\") pod \"frr-k8s-8ts6n\" (UID: \"5f685485-23a9-45dd-90cd-62ab47eab713\") " pod="metallb-system/frr-k8s-8ts6n" Feb 02 13:18:37 crc kubenswrapper[4721]: E0202 13:18:37.613204 4721 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 02 13:18:37 crc kubenswrapper[4721]: E0202 13:18:37.613229 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f685485-23a9-45dd-90cd-62ab47eab713-metrics-certs podName:5f685485-23a9-45dd-90cd-62ab47eab713 nodeName:}" failed. No retries permitted until 2026-02-02 13:18:38.113221938 +0000 UTC m=+1058.415736327 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5f685485-23a9-45dd-90cd-62ab47eab713-metrics-certs") pod "frr-k8s-8ts6n" (UID: "5f685485-23a9-45dd-90cd-62ab47eab713") : secret "frr-k8s-certs-secret" not found Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.616651 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-rq76j"] Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.616912 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5f685485-23a9-45dd-90cd-62ab47eab713-frr-startup\") pod \"frr-k8s-8ts6n\" (UID: \"5f685485-23a9-45dd-90cd-62ab47eab713\") " pod="metallb-system/frr-k8s-8ts6n" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.635033 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2wbp\" (UniqueName: \"kubernetes.io/projected/5f685485-23a9-45dd-90cd-62ab47eab713-kube-api-access-n2wbp\") pod \"frr-k8s-8ts6n\" (UID: \"5f685485-23a9-45dd-90cd-62ab47eab713\") " pod="metallb-system/frr-k8s-8ts6n" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.660431 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwfqg\" (UniqueName: \"kubernetes.io/projected/4fda33e0-d0a3-4266-aeb1-fc07965d8c35-kube-api-access-mwfqg\") pod \"frr-k8s-webhook-server-7df86c4f6c-4t8pn\" (UID: \"4fda33e0-d0a3-4266-aeb1-fc07965d8c35\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4t8pn" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.713878 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/486fb2e8-15fe-46c1-b62c-89f2b2abf064-memberlist\") pod \"speaker-2hhvl\" (UID: \"486fb2e8-15fe-46c1-b62c-89f2b2abf064\") " pod="metallb-system/speaker-2hhvl" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.713956 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/486fb2e8-15fe-46c1-b62c-89f2b2abf064-metallb-excludel2\") pod \"speaker-2hhvl\" (UID: \"486fb2e8-15fe-46c1-b62c-89f2b2abf064\") " pod="metallb-system/speaker-2hhvl" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.714015 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8fb94c8-b6a7-47c1-bf64-c01350b47983-metrics-certs\") pod \"controller-6968d8fdc4-rq76j\" (UID: \"d8fb94c8-b6a7-47c1-bf64-c01350b47983\") " pod="metallb-system/controller-6968d8fdc4-rq76j" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.714036 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4shm8\" (UniqueName: \"kubernetes.io/projected/486fb2e8-15fe-46c1-b62c-89f2b2abf064-kube-api-access-4shm8\") pod \"speaker-2hhvl\" (UID: \"486fb2e8-15fe-46c1-b62c-89f2b2abf064\") " pod="metallb-system/speaker-2hhvl" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.714184 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d8fb94c8-b6a7-47c1-bf64-c01350b47983-cert\") pod \"controller-6968d8fdc4-rq76j\" (UID: \"d8fb94c8-b6a7-47c1-bf64-c01350b47983\") " pod="metallb-system/controller-6968d8fdc4-rq76j" Feb 02 13:18:37 crc 
kubenswrapper[4721]: I0202 13:18:37.714220 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz9b8\" (UniqueName: \"kubernetes.io/projected/d8fb94c8-b6a7-47c1-bf64-c01350b47983-kube-api-access-tz9b8\") pod \"controller-6968d8fdc4-rq76j\" (UID: \"d8fb94c8-b6a7-47c1-bf64-c01350b47983\") " pod="metallb-system/controller-6968d8fdc4-rq76j" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.714240 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/486fb2e8-15fe-46c1-b62c-89f2b2abf064-metrics-certs\") pod \"speaker-2hhvl\" (UID: \"486fb2e8-15fe-46c1-b62c-89f2b2abf064\") " pod="metallb-system/speaker-2hhvl" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.815508 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/486fb2e8-15fe-46c1-b62c-89f2b2abf064-metallb-excludel2\") pod \"speaker-2hhvl\" (UID: \"486fb2e8-15fe-46c1-b62c-89f2b2abf064\") " pod="metallb-system/speaker-2hhvl" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.815592 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8fb94c8-b6a7-47c1-bf64-c01350b47983-metrics-certs\") pod \"controller-6968d8fdc4-rq76j\" (UID: \"d8fb94c8-b6a7-47c1-bf64-c01350b47983\") " pod="metallb-system/controller-6968d8fdc4-rq76j" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.815620 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4shm8\" (UniqueName: \"kubernetes.io/projected/486fb2e8-15fe-46c1-b62c-89f2b2abf064-kube-api-access-4shm8\") pod \"speaker-2hhvl\" (UID: \"486fb2e8-15fe-46c1-b62c-89f2b2abf064\") " pod="metallb-system/speaker-2hhvl" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.815706 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d8fb94c8-b6a7-47c1-bf64-c01350b47983-cert\") pod \"controller-6968d8fdc4-rq76j\" (UID: \"d8fb94c8-b6a7-47c1-bf64-c01350b47983\") " pod="metallb-system/controller-6968d8fdc4-rq76j" Feb 02 13:18:37 crc kubenswrapper[4721]: E0202 13:18:37.815776 4721 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 02 13:18:37 crc kubenswrapper[4721]: E0202 13:18:37.815866 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8fb94c8-b6a7-47c1-bf64-c01350b47983-metrics-certs podName:d8fb94c8-b6a7-47c1-bf64-c01350b47983 nodeName:}" failed. No retries permitted until 2026-02-02 13:18:38.315839737 +0000 UTC m=+1058.618354206 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d8fb94c8-b6a7-47c1-bf64-c01350b47983-metrics-certs") pod "controller-6968d8fdc4-rq76j" (UID: "d8fb94c8-b6a7-47c1-bf64-c01350b47983") : secret "controller-certs-secret" not found Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.816032 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz9b8\" (UniqueName: \"kubernetes.io/projected/d8fb94c8-b6a7-47c1-bf64-c01350b47983-kube-api-access-tz9b8\") pod \"controller-6968d8fdc4-rq76j\" (UID: \"d8fb94c8-b6a7-47c1-bf64-c01350b47983\") " pod="metallb-system/controller-6968d8fdc4-rq76j" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.816108 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/486fb2e8-15fe-46c1-b62c-89f2b2abf064-metrics-certs\") pod \"speaker-2hhvl\" (UID: \"486fb2e8-15fe-46c1-b62c-89f2b2abf064\") " pod="metallb-system/speaker-2hhvl" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.816252 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/486fb2e8-15fe-46c1-b62c-89f2b2abf064-memberlist\") pod \"speaker-2hhvl\" (UID: \"486fb2e8-15fe-46c1-b62c-89f2b2abf064\") " pod="metallb-system/speaker-2hhvl" Feb 02 13:18:37 crc kubenswrapper[4721]: E0202 13:18:37.816455 4721 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 02 13:18:37 crc kubenswrapper[4721]: E0202 13:18:37.816502 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/486fb2e8-15fe-46c1-b62c-89f2b2abf064-memberlist podName:486fb2e8-15fe-46c1-b62c-89f2b2abf064 nodeName:}" failed. No retries permitted until 2026-02-02 13:18:38.316486694 +0000 UTC m=+1058.619001083 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/486fb2e8-15fe-46c1-b62c-89f2b2abf064-memberlist") pod "speaker-2hhvl" (UID: "486fb2e8-15fe-46c1-b62c-89f2b2abf064") : secret "metallb-memberlist" not found Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.817237 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/486fb2e8-15fe-46c1-b62c-89f2b2abf064-metallb-excludel2\") pod \"speaker-2hhvl\" (UID: \"486fb2e8-15fe-46c1-b62c-89f2b2abf064\") " pod="metallb-system/speaker-2hhvl" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.818608 4721 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.821612 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/486fb2e8-15fe-46c1-b62c-89f2b2abf064-metrics-certs\") pod \"speaker-2hhvl\" (UID: \"486fb2e8-15fe-46c1-b62c-89f2b2abf064\") " pod="metallb-system/speaker-2hhvl" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.831910 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d8fb94c8-b6a7-47c1-bf64-c01350b47983-cert\") pod \"controller-6968d8fdc4-rq76j\" (UID: \"d8fb94c8-b6a7-47c1-bf64-c01350b47983\") " pod="metallb-system/controller-6968d8fdc4-rq76j" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.838333 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4shm8\" (UniqueName: \"kubernetes.io/projected/486fb2e8-15fe-46c1-b62c-89f2b2abf064-kube-api-access-4shm8\") pod \"speaker-2hhvl\" (UID: \"486fb2e8-15fe-46c1-b62c-89f2b2abf064\") " pod="metallb-system/speaker-2hhvl" Feb 02 13:18:37 crc kubenswrapper[4721]: I0202 13:18:37.841662 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz9b8\" (UniqueName: \"kubernetes.io/projected/d8fb94c8-b6a7-47c1-bf64-c01350b47983-kube-api-access-tz9b8\") pod \"controller-6968d8fdc4-rq76j\" (UID: \"d8fb94c8-b6a7-47c1-bf64-c01350b47983\") " pod="metallb-system/controller-6968d8fdc4-rq76j" Feb 02 13:18:38 crc kubenswrapper[4721]: I0202 13:18:38.121015 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4fda33e0-d0a3-4266-aeb1-fc07965d8c35-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-4t8pn\" (UID: \"4fda33e0-d0a3-4266-aeb1-fc07965d8c35\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4t8pn" Feb 02 13:18:38 crc kubenswrapper[4721]: I0202 13:18:38.121156 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f685485-23a9-45dd-90cd-62ab47eab713-metrics-certs\") pod \"frr-k8s-8ts6n\" (UID: \"5f685485-23a9-45dd-90cd-62ab47eab713\") " pod="metallb-system/frr-k8s-8ts6n" Feb 02 13:18:38 crc kubenswrapper[4721]: I0202 13:18:38.124570 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f685485-23a9-45dd-90cd-62ab47eab713-metrics-certs\") pod \"frr-k8s-8ts6n\" (UID: \"5f685485-23a9-45dd-90cd-62ab47eab713\") " pod="metallb-system/frr-k8s-8ts6n" Feb 02 13:18:38 crc kubenswrapper[4721]: I0202 13:18:38.124697 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/4fda33e0-d0a3-4266-aeb1-fc07965d8c35-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-4t8pn\" (UID: \"4fda33e0-d0a3-4266-aeb1-fc07965d8c35\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4t8pn" Feb 02 13:18:38 crc kubenswrapper[4721]: I0202 13:18:38.324057 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/486fb2e8-15fe-46c1-b62c-89f2b2abf064-memberlist\") pod \"speaker-2hhvl\" (UID: \"486fb2e8-15fe-46c1-b62c-89f2b2abf064\") " pod="metallb-system/speaker-2hhvl" Feb 02 13:18:38 crc kubenswrapper[4721]: I0202 13:18:38.324214 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8fb94c8-b6a7-47c1-bf64-c01350b47983-metrics-certs\") pod \"controller-6968d8fdc4-rq76j\" (UID: \"d8fb94c8-b6a7-47c1-bf64-c01350b47983\") " pod="metallb-system/controller-6968d8fdc4-rq76j" Feb 02 13:18:38 crc kubenswrapper[4721]: E0202 13:18:38.324228 4721 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 02 13:18:38 crc kubenswrapper[4721]: E0202 13:18:38.324294 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/486fb2e8-15fe-46c1-b62c-89f2b2abf064-memberlist podName:486fb2e8-15fe-46c1-b62c-89f2b2abf064 nodeName:}" failed. No retries permitted until 2026-02-02 13:18:39.324276561 +0000 UTC m=+1059.626790950 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/486fb2e8-15fe-46c1-b62c-89f2b2abf064-memberlist") pod "speaker-2hhvl" (UID: "486fb2e8-15fe-46c1-b62c-89f2b2abf064") : secret "metallb-memberlist" not found Feb 02 13:18:38 crc kubenswrapper[4721]: I0202 13:18:38.327882 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8fb94c8-b6a7-47c1-bf64-c01350b47983-metrics-certs\") pod \"controller-6968d8fdc4-rq76j\" (UID: \"d8fb94c8-b6a7-47c1-bf64-c01350b47983\") " pod="metallb-system/controller-6968d8fdc4-rq76j" Feb 02 13:18:38 crc kubenswrapper[4721]: I0202 13:18:38.370574 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-8ts6n" Feb 02 13:18:38 crc kubenswrapper[4721]: I0202 13:18:38.377335 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4t8pn" Feb 02 13:18:38 crc kubenswrapper[4721]: I0202 13:18:38.491372 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-rq76j" Feb 02 13:18:38 crc kubenswrapper[4721]: I0202 13:18:38.787204 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-4t8pn"] Feb 02 13:18:38 crc kubenswrapper[4721]: I0202 13:18:38.788742 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8ts6n" event={"ID":"5f685485-23a9-45dd-90cd-62ab47eab713","Type":"ContainerStarted","Data":"3c527d57a08a523721a1d2de7391df538f635da0ad2beb4688f31b450cf2070b"} Feb 02 13:18:38 crc kubenswrapper[4721]: W0202 13:18:38.792744 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fda33e0_d0a3_4266_aeb1_fc07965d8c35.slice/crio-22e4ed4f177ecfb7f4d8515b57e888c9baa5330adfb250d0dc3d7cb511d4ba6e WatchSource:0}: Error finding container 22e4ed4f177ecfb7f4d8515b57e888c9baa5330adfb250d0dc3d7cb511d4ba6e: Status 404 returned error can't find the container with id 22e4ed4f177ecfb7f4d8515b57e888c9baa5330adfb250d0dc3d7cb511d4ba6e Feb 02 13:18:39 crc kubenswrapper[4721]: W0202 13:18:39.015624 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8fb94c8_b6a7_47c1_bf64_c01350b47983.slice/crio-e2a18b619088f9a7400300bce19947499337bdf619bcac5b0697c49046b670ce WatchSource:0}: Error finding container e2a18b619088f9a7400300bce19947499337bdf619bcac5b0697c49046b670ce: Status 404 returned error can't find the container with id e2a18b619088f9a7400300bce19947499337bdf619bcac5b0697c49046b670ce Feb 02 13:18:39 crc kubenswrapper[4721]: I0202 13:18:39.017189 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-rq76j"] Feb 02 13:18:39 crc kubenswrapper[4721]: I0202 13:18:39.342872 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/486fb2e8-15fe-46c1-b62c-89f2b2abf064-memberlist\") pod \"speaker-2hhvl\" (UID: \"486fb2e8-15fe-46c1-b62c-89f2b2abf064\") " pod="metallb-system/speaker-2hhvl" Feb 02 13:18:39 crc kubenswrapper[4721]: I0202 13:18:39.351889 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/486fb2e8-15fe-46c1-b62c-89f2b2abf064-memberlist\") pod \"speaker-2hhvl\" (UID: \"486fb2e8-15fe-46c1-b62c-89f2b2abf064\") " pod="metallb-system/speaker-2hhvl" Feb 02 13:18:39 crc kubenswrapper[4721]: I0202 13:18:39.367796 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-2hhvl" Feb 02 13:18:39 crc kubenswrapper[4721]: W0202 13:18:39.411074 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod486fb2e8_15fe_46c1_b62c_89f2b2abf064.slice/crio-b3e8d397e687e845a1179eb25a3f460b85adfe038222c9a0ebf9a5837ef9f753 WatchSource:0}: Error finding container b3e8d397e687e845a1179eb25a3f460b85adfe038222c9a0ebf9a5837ef9f753: Status 404 returned error can't find the container with id b3e8d397e687e845a1179eb25a3f460b85adfe038222c9a0ebf9a5837ef9f753 Feb 02 13:18:39 crc kubenswrapper[4721]: I0202 13:18:39.808203 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4t8pn" event={"ID":"4fda33e0-d0a3-4266-aeb1-fc07965d8c35","Type":"ContainerStarted","Data":"22e4ed4f177ecfb7f4d8515b57e888c9baa5330adfb250d0dc3d7cb511d4ba6e"} Feb 02 13:18:39 crc kubenswrapper[4721]: I0202 13:18:39.812150 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2hhvl" event={"ID":"486fb2e8-15fe-46c1-b62c-89f2b2abf064","Type":"ContainerStarted","Data":"4e01c83cc4931fdc3383462b8c5d90108b3cba3fc105a860ae6d9f279d62b7b8"} Feb 02 13:18:39 crc kubenswrapper[4721]: I0202 13:18:39.812190 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2hhvl" event={"ID":"486fb2e8-15fe-46c1-b62c-89f2b2abf064","Type":"ContainerStarted","Data":"b3e8d397e687e845a1179eb25a3f460b85adfe038222c9a0ebf9a5837ef9f753"} Feb 02 13:18:39 crc kubenswrapper[4721]: I0202 13:18:39.815447 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-rq76j" event={"ID":"d8fb94c8-b6a7-47c1-bf64-c01350b47983","Type":"ContainerStarted","Data":"d8803cd645e8d946431f086660924d525d3399b7fecc9d2feb715e6cf0f502d3"} Feb 02 13:18:39 crc kubenswrapper[4721]: I0202 13:18:39.815473 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-rq76j" event={"ID":"d8fb94c8-b6a7-47c1-bf64-c01350b47983","Type":"ContainerStarted","Data":"3c1022a20bdd14d59f969f522a603244b11617cfc63c8b6f56b86319e605b77f"} Feb 02 13:18:39 crc kubenswrapper[4721]: I0202 13:18:39.815483 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-rq76j" event={"ID":"d8fb94c8-b6a7-47c1-bf64-c01350b47983","Type":"ContainerStarted","Data":"e2a18b619088f9a7400300bce19947499337bdf619bcac5b0697c49046b670ce"} Feb 02 13:18:39 crc kubenswrapper[4721]: I0202 13:18:39.816350 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-rq76j" Feb 02 13:18:39 crc kubenswrapper[4721]: I0202 13:18:39.850302 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-rq76j" podStartSLOduration=2.850283362 podStartE2EDuration="2.850283362s" podCreationTimestamp="2026-02-02 13:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:18:39.845835523 +0000 UTC m=+1060.148349922" watchObservedRunningTime="2026-02-02 13:18:39.850283362 +0000 UTC m=+1060.152797751" Feb 02 13:18:40 crc kubenswrapper[4721]: I0202 13:18:40.830171 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2hhvl" 
event={"ID":"486fb2e8-15fe-46c1-b62c-89f2b2abf064","Type":"ContainerStarted","Data":"20db97f9c0ed2d54ccc6d3e4d3d55445f8b7bc1df45967185d9d66a3718eaa3d"} Feb 02 13:18:40 crc kubenswrapper[4721]: I0202 13:18:40.856618 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-2hhvl" podStartSLOduration=3.856599817 podStartE2EDuration="3.856599817s" podCreationTimestamp="2026-02-02 13:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:18:40.849958728 +0000 UTC m=+1061.152473127" watchObservedRunningTime="2026-02-02 13:18:40.856599817 +0000 UTC m=+1061.159114206" Feb 02 13:18:41 crc kubenswrapper[4721]: I0202 13:18:41.840044 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-2hhvl" Feb 02 13:18:44 crc kubenswrapper[4721]: I0202 13:18:44.763545 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:18:44 crc kubenswrapper[4721]: I0202 13:18:44.764255 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:18:44 crc kubenswrapper[4721]: I0202 13:18:44.764320 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" Feb 02 13:18:44 crc kubenswrapper[4721]: I0202 13:18:44.882171 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3e250ca33160c82bc83b5c1d01cc482ebd55cdb3c1b9ae291d6af786cb617e66"} pod="openshift-machine-config-operator/machine-config-daemon-rppjz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 13:18:44 crc kubenswrapper[4721]: I0202 13:18:44.882255 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" containerID="cri-o://3e250ca33160c82bc83b5c1d01cc482ebd55cdb3c1b9ae291d6af786cb617e66" gracePeriod=600 Feb 02 13:18:45 crc kubenswrapper[4721]: I0202 13:18:45.892038 4721 generic.go:334] "Generic (PLEG): container finished" podID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerID="3e250ca33160c82bc83b5c1d01cc482ebd55cdb3c1b9ae291d6af786cb617e66" exitCode=0 Feb 02 13:18:45 crc kubenswrapper[4721]: I0202 13:18:45.892116 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerDied","Data":"3e250ca33160c82bc83b5c1d01cc482ebd55cdb3c1b9ae291d6af786cb617e66"} Feb 02 13:18:45 crc kubenswrapper[4721]: I0202 13:18:45.892378 4721 scope.go:117] "RemoveContainer" containerID="4e271bf7e19d8205d47335a427c173d1e8d60e0f2a6167b224679306973cc1cc" Feb 02 13:18:46 crc kubenswrapper[4721]: I0202 13:18:46.901459 4721 generic.go:334] "Generic (PLEG): container finished" 
podID="5f685485-23a9-45dd-90cd-62ab47eab713" containerID="03d0fc817d886ad0f0ee26a111fa8a39a98869ad531602532cb9fb0031c9ea49" exitCode=0 Feb 02 13:18:46 crc kubenswrapper[4721]: I0202 13:18:46.901500 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8ts6n" event={"ID":"5f685485-23a9-45dd-90cd-62ab47eab713","Type":"ContainerDied","Data":"03d0fc817d886ad0f0ee26a111fa8a39a98869ad531602532cb9fb0031c9ea49"} Feb 02 13:18:46 crc kubenswrapper[4721]: I0202 13:18:46.903279 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4t8pn" event={"ID":"4fda33e0-d0a3-4266-aeb1-fc07965d8c35","Type":"ContainerStarted","Data":"315eeb7e4fb2b03ec8db702044e7c5ba8604f9647e470872dd5a289ffe0ce83a"} Feb 02 13:18:46 crc kubenswrapper[4721]: I0202 13:18:46.903568 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4t8pn" Feb 02 13:18:46 crc kubenswrapper[4721]: I0202 13:18:46.906388 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerStarted","Data":"56e02e958f304734b98b90c0b35547a7aaeb3ba27ad6cd35ef754f549abd2513"} Feb 02 13:18:46 crc kubenswrapper[4721]: I0202 13:18:46.958020 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4t8pn" podStartSLOduration=2.978323505 podStartE2EDuration="9.958003139s" podCreationTimestamp="2026-02-02 13:18:37 +0000 UTC" firstStartedPulling="2026-02-02 13:18:38.799128932 +0000 UTC m=+1059.101643321" lastFinishedPulling="2026-02-02 13:18:45.778808566 +0000 UTC m=+1066.081322955" observedRunningTime="2026-02-02 13:18:46.953460047 +0000 UTC m=+1067.255974436" watchObservedRunningTime="2026-02-02 13:18:46.958003139 +0000 UTC m=+1067.260517528" Feb 02 13:18:47 crc kubenswrapper[4721]: I0202 13:18:47.916496 4721 generic.go:334] "Generic (PLEG): container finished" podID="5f685485-23a9-45dd-90cd-62ab47eab713" containerID="1dd6f8542f878a906dd085b4da23ddb4399ef98f555265fd1c4109c17abb988b" exitCode=0 Feb 02 13:18:47 crc kubenswrapper[4721]: I0202 13:18:47.916560 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8ts6n" event={"ID":"5f685485-23a9-45dd-90cd-62ab47eab713","Type":"ContainerDied","Data":"1dd6f8542f878a906dd085b4da23ddb4399ef98f555265fd1c4109c17abb988b"} Feb 02 13:18:48 crc kubenswrapper[4721]: E0202 13:18:48.192011 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f685485_23a9_45dd_90cd_62ab47eab713.slice/crio-conmon-64bb773d72a6c4efd1a2ed88c4109619b1ebdec1a5b49cd18c11cc35b997b852.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f685485_23a9_45dd_90cd_62ab47eab713.slice/crio-64bb773d72a6c4efd1a2ed88c4109619b1ebdec1a5b49cd18c11cc35b997b852.scope\": RecentStats: unable to find data in memory cache]" Feb 02 13:18:48 crc kubenswrapper[4721]: E0202 13:18:48.192134 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f685485_23a9_45dd_90cd_62ab47eab713.slice/crio-64bb773d72a6c4efd1a2ed88c4109619b1ebdec1a5b49cd18c11cc35b997b852.scope\": RecentStats: unable to find data 
in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f685485_23a9_45dd_90cd_62ab47eab713.slice/crio-conmon-64bb773d72a6c4efd1a2ed88c4109619b1ebdec1a5b49cd18c11cc35b997b852.scope\": RecentStats: unable to find data in memory cache]" Feb 02 13:18:49 crc kubenswrapper[4721]: I0202 13:18:49.141018 4721 generic.go:334] "Generic (PLEG): container finished" podID="5f685485-23a9-45dd-90cd-62ab47eab713" containerID="64bb773d72a6c4efd1a2ed88c4109619b1ebdec1a5b49cd18c11cc35b997b852" exitCode=0 Feb 02 13:18:49 crc kubenswrapper[4721]: I0202 13:18:49.141126 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8ts6n" event={"ID":"5f685485-23a9-45dd-90cd-62ab47eab713","Type":"ContainerDied","Data":"64bb773d72a6c4efd1a2ed88c4109619b1ebdec1a5b49cd18c11cc35b997b852"} Feb 02 13:18:49 crc kubenswrapper[4721]: I0202 13:18:49.373195 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-2hhvl" Feb 02 13:18:50 crc kubenswrapper[4721]: I0202 13:18:50.155867 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8ts6n" event={"ID":"5f685485-23a9-45dd-90cd-62ab47eab713","Type":"ContainerStarted","Data":"5e48b3a0ec805fb51069f290912489d96cb60f8b72eae9fa99a8c8fc7c14f2be"} Feb 02 13:18:50 crc kubenswrapper[4721]: I0202 13:18:50.156198 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8ts6n" event={"ID":"5f685485-23a9-45dd-90cd-62ab47eab713","Type":"ContainerStarted","Data":"a64fa8b85d8bbf563e1cc9518dd9eb6c3a897f8cd2422104902533866289d43c"} Feb 02 13:18:50 crc kubenswrapper[4721]: I0202 13:18:50.156229 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8ts6n" event={"ID":"5f685485-23a9-45dd-90cd-62ab47eab713","Type":"ContainerStarted","Data":"c6adc3f6a792763cfb4a522e36a65254e33d17da87f5bce9cff96ee64cdba71a"} Feb 02 13:18:50 crc kubenswrapper[4721]: I0202 13:18:50.156240 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8ts6n" event={"ID":"5f685485-23a9-45dd-90cd-62ab47eab713","Type":"ContainerStarted","Data":"a6a02cd5e1ffaadb75893b390ecd635f00187ec0edde1848d141cf08f58a4b71"} Feb 02 13:18:50 crc kubenswrapper[4721]: I0202 13:18:50.156248 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8ts6n" event={"ID":"5f685485-23a9-45dd-90cd-62ab47eab713","Type":"ContainerStarted","Data":"62bb24741d74bc1b1630da97df0f7bd565db0465def4b5400f8055ee3f909bef"} Feb 02 13:18:51 crc kubenswrapper[4721]: I0202 13:18:51.171778 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8ts6n" event={"ID":"5f685485-23a9-45dd-90cd-62ab47eab713","Type":"ContainerStarted","Data":"7639a0f645b1feb6aeacbacb7cffe21d371b2fa359e6af58f1d5c26c1e543e96"} Feb 02 13:18:51 crc kubenswrapper[4721]: I0202 13:18:51.173620 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-8ts6n" Feb 02 13:18:51 crc kubenswrapper[4721]: I0202 13:18:51.194125 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-8ts6n" podStartSLOduration=6.948643856 podStartE2EDuration="14.194102957s" podCreationTimestamp="2026-02-02 13:18:37 +0000 UTC" firstStartedPulling="2026-02-02 13:18:38.509993456 +0000 UTC m=+1058.812507845" lastFinishedPulling="2026-02-02 13:18:45.755452557 +0000 UTC m=+1066.057966946" observedRunningTime="2026-02-02 13:18:51.193317445 +0000 UTC m=+1071.495831834" 
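
The startup-latency figures in this entry are internally consistent: podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration equals that span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The arithmetic, reproduced for frr-k8s-8ts6n with the exact timestamps logged here:

    package main

    import (
        "fmt"
        "time"
    )

    // Reproduces the pod_startup_latency_tracker figures for frr-k8s-8ts6n.
    // time.Parse accepts the fractional seconds even though the layout omits
    // them.
    func main() {
        const layout = "2006-01-02 15:04:05 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2026-02-02 13:18:37 +0000 UTC")
        firstPull := parse("2026-02-02 13:18:38.509993456 +0000 UTC")
        lastPull := parse("2026-02-02 13:18:45.755452557 +0000 UTC")
        running := parse("2026-02-02 13:18:51.194102957 +0000 UTC") // watchObservedRunningTime

        e2e := running.Sub(created)
        slo := e2e - lastPull.Sub(firstPull)
        fmt.Println("podStartE2EDuration:", e2e) // 14.194102957s
        fmt.Println("podStartSLOduration:", slo) // 6.948643856s
    }
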
watchObservedRunningTime="2026-02-02 13:18:51.194102957 +0000 UTC m=+1071.496617366" Feb 02 13:18:52 crc kubenswrapper[4721]: I0202 13:18:52.092541 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-m68ht"] Feb 02 13:18:52 crc kubenswrapper[4721]: I0202 13:18:52.094158 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-m68ht" Feb 02 13:18:52 crc kubenswrapper[4721]: I0202 13:18:52.095667 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-f4l7g" Feb 02 13:18:52 crc kubenswrapper[4721]: I0202 13:18:52.097064 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 02 13:18:52 crc kubenswrapper[4721]: I0202 13:18:52.097283 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 02 13:18:52 crc kubenswrapper[4721]: I0202 13:18:52.105398 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-m68ht"] Feb 02 13:18:52 crc kubenswrapper[4721]: I0202 13:18:52.182426 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdsw8\" (UniqueName: \"kubernetes.io/projected/0014b6b6-c71c-4e95-8297-3eb2fdc64a74-kube-api-access-kdsw8\") pod \"openstack-operator-index-m68ht\" (UID: \"0014b6b6-c71c-4e95-8297-3eb2fdc64a74\") " pod="openstack-operators/openstack-operator-index-m68ht" Feb 02 13:18:52 crc kubenswrapper[4721]: I0202 13:18:52.284290 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdsw8\" (UniqueName: \"kubernetes.io/projected/0014b6b6-c71c-4e95-8297-3eb2fdc64a74-kube-api-access-kdsw8\") pod \"openstack-operator-index-m68ht\" (UID: \"0014b6b6-c71c-4e95-8297-3eb2fdc64a74\") " pod="openstack-operators/openstack-operator-index-m68ht" Feb 02 13:18:52 crc kubenswrapper[4721]: I0202 13:18:52.304839 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdsw8\" (UniqueName: \"kubernetes.io/projected/0014b6b6-c71c-4e95-8297-3eb2fdc64a74-kube-api-access-kdsw8\") pod \"openstack-operator-index-m68ht\" (UID: \"0014b6b6-c71c-4e95-8297-3eb2fdc64a74\") " pod="openstack-operators/openstack-operator-index-m68ht" Feb 02 13:18:52 crc kubenswrapper[4721]: I0202 13:18:52.413128 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-m68ht" Feb 02 13:18:52 crc kubenswrapper[4721]: I0202 13:18:52.843461 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-m68ht"] Feb 02 13:18:53 crc kubenswrapper[4721]: I0202 13:18:53.188540 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-m68ht" event={"ID":"0014b6b6-c71c-4e95-8297-3eb2fdc64a74","Type":"ContainerStarted","Data":"3249ab65e4b96c53173fa3e46efe93bbc8beaf0a9ad7e6942999d6268e128321"} Feb 02 13:18:53 crc kubenswrapper[4721]: I0202 13:18:53.371920 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-8ts6n" Feb 02 13:18:53 crc kubenswrapper[4721]: I0202 13:18:53.409755 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-8ts6n" Feb 02 13:18:56 crc kubenswrapper[4721]: I0202 13:18:56.215127 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-m68ht" event={"ID":"0014b6b6-c71c-4e95-8297-3eb2fdc64a74","Type":"ContainerStarted","Data":"da2a1fef91a542e480ee355b06c71667fc75ede6837e2adeb932832367662bcb"} Feb 02 13:18:56 crc kubenswrapper[4721]: I0202 13:18:56.229891 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-m68ht" podStartSLOduration=1.6442831820000001 podStartE2EDuration="4.229863669s" podCreationTimestamp="2026-02-02 13:18:52 +0000 UTC" firstStartedPulling="2026-02-02 13:18:52.854540703 +0000 UTC m=+1073.157055092" lastFinishedPulling="2026-02-02 13:18:55.44012119 +0000 UTC m=+1075.742635579" observedRunningTime="2026-02-02 13:18:56.227875606 +0000 UTC m=+1076.530390035" watchObservedRunningTime="2026-02-02 13:18:56.229863669 +0000 UTC m=+1076.532378108" Feb 02 13:18:56 crc kubenswrapper[4721]: I0202 13:18:56.258111 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-m68ht"] Feb 02 13:18:57 crc kubenswrapper[4721]: I0202 13:18:57.001242 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-lxqsx"] Feb 02 13:18:57 crc kubenswrapper[4721]: I0202 13:18:57.003334 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-lxqsx" Feb 02 13:18:57 crc kubenswrapper[4721]: I0202 13:18:57.012871 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-lxqsx"] Feb 02 13:18:57 crc kubenswrapper[4721]: I0202 13:18:57.058719 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5b94\" (UniqueName: \"kubernetes.io/projected/abf13eed-433d-4afa-809d-bd863e469366-kube-api-access-b5b94\") pod \"openstack-operator-index-lxqsx\" (UID: \"abf13eed-433d-4afa-809d-bd863e469366\") " pod="openstack-operators/openstack-operator-index-lxqsx" Feb 02 13:18:57 crc kubenswrapper[4721]: I0202 13:18:57.160621 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5b94\" (UniqueName: \"kubernetes.io/projected/abf13eed-433d-4afa-809d-bd863e469366-kube-api-access-b5b94\") pod \"openstack-operator-index-lxqsx\" (UID: \"abf13eed-433d-4afa-809d-bd863e469366\") " pod="openstack-operators/openstack-operator-index-lxqsx" Feb 02 13:18:57 crc kubenswrapper[4721]: I0202 13:18:57.179564 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5b94\" (UniqueName: \"kubernetes.io/projected/abf13eed-433d-4afa-809d-bd863e469366-kube-api-access-b5b94\") pod \"openstack-operator-index-lxqsx\" (UID: \"abf13eed-433d-4afa-809d-bd863e469366\") " pod="openstack-operators/openstack-operator-index-lxqsx" Feb 02 13:18:57 crc kubenswrapper[4721]: I0202 13:18:57.326257 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-lxqsx" Feb 02 13:18:57 crc kubenswrapper[4721]: I0202 13:18:57.723065 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-lxqsx"] Feb 02 13:18:58 crc kubenswrapper[4721]: I0202 13:18:58.233555 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lxqsx" event={"ID":"abf13eed-433d-4afa-809d-bd863e469366","Type":"ContainerStarted","Data":"78af90db2a894f83627a56f1ee1bfc25c6064e4003435476a66e6e31796176d2"} Feb 02 13:18:58 crc kubenswrapper[4721]: I0202 13:18:58.233673 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-m68ht" podUID="0014b6b6-c71c-4e95-8297-3eb2fdc64a74" containerName="registry-server" containerID="cri-o://da2a1fef91a542e480ee355b06c71667fc75ede6837e2adeb932832367662bcb" gracePeriod=2 Feb 02 13:18:58 crc kubenswrapper[4721]: I0202 13:18:58.233634 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lxqsx" event={"ID":"abf13eed-433d-4afa-809d-bd863e469366","Type":"ContainerStarted","Data":"7b001626c00bd24664dce0b373d83fc851a0cfda003589d7497034a9c0d8d9ae"} Feb 02 13:18:58 crc kubenswrapper[4721]: I0202 13:18:58.254616 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-lxqsx" podStartSLOduration=2.200363574 podStartE2EDuration="2.254595923s" podCreationTimestamp="2026-02-02 13:18:56 +0000 UTC" firstStartedPulling="2026-02-02 13:18:57.726415228 +0000 UTC m=+1078.028929617" lastFinishedPulling="2026-02-02 13:18:57.780647577 +0000 UTC m=+1078.083161966" observedRunningTime="2026-02-02 13:18:58.247058911 +0000 UTC m=+1078.549573310" watchObservedRunningTime="2026-02-02 13:18:58.254595923 +0000 UTC 
m=+1078.557110322" Feb 02 13:18:58 crc kubenswrapper[4721]: I0202 13:18:58.393229 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4t8pn" Feb 02 13:18:58 crc kubenswrapper[4721]: I0202 13:18:58.496831 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-rq76j" Feb 02 13:18:58 crc kubenswrapper[4721]: I0202 13:18:58.714356 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-m68ht" Feb 02 13:18:58 crc kubenswrapper[4721]: I0202 13:18:58.892583 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdsw8\" (UniqueName: \"kubernetes.io/projected/0014b6b6-c71c-4e95-8297-3eb2fdc64a74-kube-api-access-kdsw8\") pod \"0014b6b6-c71c-4e95-8297-3eb2fdc64a74\" (UID: \"0014b6b6-c71c-4e95-8297-3eb2fdc64a74\") " Feb 02 13:18:58 crc kubenswrapper[4721]: I0202 13:18:58.898692 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0014b6b6-c71c-4e95-8297-3eb2fdc64a74-kube-api-access-kdsw8" (OuterVolumeSpecName: "kube-api-access-kdsw8") pod "0014b6b6-c71c-4e95-8297-3eb2fdc64a74" (UID: "0014b6b6-c71c-4e95-8297-3eb2fdc64a74"). InnerVolumeSpecName "kube-api-access-kdsw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:18:58 crc kubenswrapper[4721]: I0202 13:18:58.994086 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdsw8\" (UniqueName: \"kubernetes.io/projected/0014b6b6-c71c-4e95-8297-3eb2fdc64a74-kube-api-access-kdsw8\") on node \"crc\" DevicePath \"\"" Feb 02 13:18:59 crc kubenswrapper[4721]: I0202 13:18:59.240779 4721 generic.go:334] "Generic (PLEG): container finished" podID="0014b6b6-c71c-4e95-8297-3eb2fdc64a74" containerID="da2a1fef91a542e480ee355b06c71667fc75ede6837e2adeb932832367662bcb" exitCode=0 Feb 02 13:18:59 crc kubenswrapper[4721]: I0202 13:18:59.240868 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-m68ht" Feb 02 13:18:59 crc kubenswrapper[4721]: I0202 13:18:59.240876 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-m68ht" event={"ID":"0014b6b6-c71c-4e95-8297-3eb2fdc64a74","Type":"ContainerDied","Data":"da2a1fef91a542e480ee355b06c71667fc75ede6837e2adeb932832367662bcb"} Feb 02 13:18:59 crc kubenswrapper[4721]: I0202 13:18:59.240933 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-m68ht" event={"ID":"0014b6b6-c71c-4e95-8297-3eb2fdc64a74","Type":"ContainerDied","Data":"3249ab65e4b96c53173fa3e46efe93bbc8beaf0a9ad7e6942999d6268e128321"} Feb 02 13:18:59 crc kubenswrapper[4721]: I0202 13:18:59.240951 4721 scope.go:117] "RemoveContainer" containerID="da2a1fef91a542e480ee355b06c71667fc75ede6837e2adeb932832367662bcb" Feb 02 13:18:59 crc kubenswrapper[4721]: I0202 13:18:59.258913 4721 scope.go:117] "RemoveContainer" containerID="da2a1fef91a542e480ee355b06c71667fc75ede6837e2adeb932832367662bcb" Feb 02 13:18:59 crc kubenswrapper[4721]: E0202 13:18:59.259419 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da2a1fef91a542e480ee355b06c71667fc75ede6837e2adeb932832367662bcb\": container with ID starting with da2a1fef91a542e480ee355b06c71667fc75ede6837e2adeb932832367662bcb not found: ID does not exist" containerID="da2a1fef91a542e480ee355b06c71667fc75ede6837e2adeb932832367662bcb" Feb 02 13:18:59 crc kubenswrapper[4721]: I0202 13:18:59.259451 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da2a1fef91a542e480ee355b06c71667fc75ede6837e2adeb932832367662bcb"} err="failed to get container status \"da2a1fef91a542e480ee355b06c71667fc75ede6837e2adeb932832367662bcb\": rpc error: code = NotFound desc = could not find container \"da2a1fef91a542e480ee355b06c71667fc75ede6837e2adeb932832367662bcb\": container with ID starting with da2a1fef91a542e480ee355b06c71667fc75ede6837e2adeb932832367662bcb not found: ID does not exist" Feb 02 13:18:59 crc kubenswrapper[4721]: I0202 13:18:59.274422 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-m68ht"] Feb 02 13:18:59 crc kubenswrapper[4721]: I0202 13:18:59.280378 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-m68ht"] Feb 02 13:19:00 crc kubenswrapper[4721]: I0202 13:19:00.418827 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0014b6b6-c71c-4e95-8297-3eb2fdc64a74" path="/var/lib/kubelet/pods/0014b6b6-c71c-4e95-8297-3eb2fdc64a74/volumes" Feb 02 13:19:07 crc kubenswrapper[4721]: I0202 13:19:07.326847 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-lxqsx" Feb 02 13:19:07 crc kubenswrapper[4721]: I0202 13:19:07.327396 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-lxqsx" Feb 02 13:19:07 crc kubenswrapper[4721]: I0202 13:19:07.365247 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-lxqsx" Feb 02 13:19:08 crc kubenswrapper[4721]: I0202 13:19:08.336801 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-lxqsx" Feb 02 13:19:08 crc 
kubenswrapper[4721]: I0202 13:19:08.374244 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-8ts6n" Feb 02 13:19:15 crc kubenswrapper[4721]: I0202 13:19:15.115707 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n"] Feb 02 13:19:15 crc kubenswrapper[4721]: E0202 13:19:15.116582 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0014b6b6-c71c-4e95-8297-3eb2fdc64a74" containerName="registry-server" Feb 02 13:19:15 crc kubenswrapper[4721]: I0202 13:19:15.116595 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="0014b6b6-c71c-4e95-8297-3eb2fdc64a74" containerName="registry-server" Feb 02 13:19:15 crc kubenswrapper[4721]: I0202 13:19:15.116931 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="0014b6b6-c71c-4e95-8297-3eb2fdc64a74" containerName="registry-server" Feb 02 13:19:15 crc kubenswrapper[4721]: I0202 13:19:15.118253 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n" Feb 02 13:19:15 crc kubenswrapper[4721]: I0202 13:19:15.122231 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-ch7pw" Feb 02 13:19:15 crc kubenswrapper[4721]: I0202 13:19:15.124003 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n"] Feb 02 13:19:15 crc kubenswrapper[4721]: I0202 13:19:15.191445 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckmx9\" (UniqueName: \"kubernetes.io/projected/5c48ead1-c06b-4a13-b92a-ce7a474e6233-kube-api-access-ckmx9\") pod \"37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n\" (UID: \"5c48ead1-c06b-4a13-b92a-ce7a474e6233\") " pod="openstack-operators/37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n" Feb 02 13:19:15 crc kubenswrapper[4721]: I0202 13:19:15.191514 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c48ead1-c06b-4a13-b92a-ce7a474e6233-util\") pod \"37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n\" (UID: \"5c48ead1-c06b-4a13-b92a-ce7a474e6233\") " pod="openstack-operators/37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n" Feb 02 13:19:15 crc kubenswrapper[4721]: I0202 13:19:15.191594 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c48ead1-c06b-4a13-b92a-ce7a474e6233-bundle\") pod \"37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n\" (UID: \"5c48ead1-c06b-4a13-b92a-ce7a474e6233\") " pod="openstack-operators/37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n" Feb 02 13:19:15 crc kubenswrapper[4721]: I0202 13:19:15.293835 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c48ead1-c06b-4a13-b92a-ce7a474e6233-bundle\") pod \"37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n\" (UID: \"5c48ead1-c06b-4a13-b92a-ce7a474e6233\") " pod="openstack-operators/37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n" Feb 02 13:19:15 crc kubenswrapper[4721]: I0202 13:19:15.294030 4721 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckmx9\" (UniqueName: \"kubernetes.io/projected/5c48ead1-c06b-4a13-b92a-ce7a474e6233-kube-api-access-ckmx9\") pod \"37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n\" (UID: \"5c48ead1-c06b-4a13-b92a-ce7a474e6233\") " pod="openstack-operators/37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n" Feb 02 13:19:15 crc kubenswrapper[4721]: I0202 13:19:15.294058 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c48ead1-c06b-4a13-b92a-ce7a474e6233-util\") pod \"37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n\" (UID: \"5c48ead1-c06b-4a13-b92a-ce7a474e6233\") " pod="openstack-operators/37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n" Feb 02 13:19:15 crc kubenswrapper[4721]: I0202 13:19:15.294631 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c48ead1-c06b-4a13-b92a-ce7a474e6233-util\") pod \"37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n\" (UID: \"5c48ead1-c06b-4a13-b92a-ce7a474e6233\") " pod="openstack-operators/37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n" Feb 02 13:19:15 crc kubenswrapper[4721]: I0202 13:19:15.296557 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c48ead1-c06b-4a13-b92a-ce7a474e6233-bundle\") pod \"37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n\" (UID: \"5c48ead1-c06b-4a13-b92a-ce7a474e6233\") " pod="openstack-operators/37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n" Feb 02 13:19:15 crc kubenswrapper[4721]: I0202 13:19:15.318679 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckmx9\" (UniqueName: \"kubernetes.io/projected/5c48ead1-c06b-4a13-b92a-ce7a474e6233-kube-api-access-ckmx9\") pod \"37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n\" (UID: \"5c48ead1-c06b-4a13-b92a-ce7a474e6233\") " pod="openstack-operators/37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n" Feb 02 13:19:15 crc kubenswrapper[4721]: I0202 13:19:15.444589 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n" Feb 02 13:19:15 crc kubenswrapper[4721]: I0202 13:19:15.859012 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n"] Feb 02 13:19:15 crc kubenswrapper[4721]: W0202 13:19:15.869292 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c48ead1_c06b_4a13_b92a_ce7a474e6233.slice/crio-d0d8199e845beed805cad86e56edb529277d411808ef589eb129d8a543059372 WatchSource:0}: Error finding container d0d8199e845beed805cad86e56edb529277d411808ef589eb129d8a543059372: Status 404 returned error can't find the container with id d0d8199e845beed805cad86e56edb529277d411808ef589eb129d8a543059372 Feb 02 13:19:16 crc kubenswrapper[4721]: I0202 13:19:16.368165 4721 generic.go:334] "Generic (PLEG): container finished" podID="5c48ead1-c06b-4a13-b92a-ce7a474e6233" containerID="a175b67ada1bce963fa1271e6de88c07606188a982974922197fec921731eed6" exitCode=0 Feb 02 13:19:16 crc kubenswrapper[4721]: I0202 13:19:16.368214 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n" event={"ID":"5c48ead1-c06b-4a13-b92a-ce7a474e6233","Type":"ContainerDied","Data":"a175b67ada1bce963fa1271e6de88c07606188a982974922197fec921731eed6"} Feb 02 13:19:16 crc kubenswrapper[4721]: I0202 13:19:16.368245 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n" event={"ID":"5c48ead1-c06b-4a13-b92a-ce7a474e6233","Type":"ContainerStarted","Data":"d0d8199e845beed805cad86e56edb529277d411808ef589eb129d8a543059372"} Feb 02 13:19:17 crc kubenswrapper[4721]: I0202 13:19:17.377806 4721 generic.go:334] "Generic (PLEG): container finished" podID="5c48ead1-c06b-4a13-b92a-ce7a474e6233" containerID="87ed60fc00e9a438b589227413ad2523c5bbe2c862bd494d5c233bc88185aa3e" exitCode=0 Feb 02 13:19:17 crc kubenswrapper[4721]: I0202 13:19:17.377888 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n" event={"ID":"5c48ead1-c06b-4a13-b92a-ce7a474e6233","Type":"ContainerDied","Data":"87ed60fc00e9a438b589227413ad2523c5bbe2c862bd494d5c233bc88185aa3e"} Feb 02 13:19:18 crc kubenswrapper[4721]: I0202 13:19:18.387744 4721 generic.go:334] "Generic (PLEG): container finished" podID="5c48ead1-c06b-4a13-b92a-ce7a474e6233" containerID="0f4fa58b324d00f07fa91a0d980dd156f0ca293cb3afe96b229c216b4a1ef522" exitCode=0 Feb 02 13:19:18 crc kubenswrapper[4721]: I0202 13:19:18.387797 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n" event={"ID":"5c48ead1-c06b-4a13-b92a-ce7a474e6233","Type":"ContainerDied","Data":"0f4fa58b324d00f07fa91a0d980dd156f0ca293cb3afe96b229c216b4a1ef522"} Feb 02 13:19:19 crc kubenswrapper[4721]: I0202 13:19:19.701437 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n" Feb 02 13:19:19 crc kubenswrapper[4721]: I0202 13:19:19.770606 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c48ead1-c06b-4a13-b92a-ce7a474e6233-bundle\") pod \"5c48ead1-c06b-4a13-b92a-ce7a474e6233\" (UID: \"5c48ead1-c06b-4a13-b92a-ce7a474e6233\") " Feb 02 13:19:19 crc kubenswrapper[4721]: I0202 13:19:19.770687 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckmx9\" (UniqueName: \"kubernetes.io/projected/5c48ead1-c06b-4a13-b92a-ce7a474e6233-kube-api-access-ckmx9\") pod \"5c48ead1-c06b-4a13-b92a-ce7a474e6233\" (UID: \"5c48ead1-c06b-4a13-b92a-ce7a474e6233\") " Feb 02 13:19:19 crc kubenswrapper[4721]: I0202 13:19:19.770724 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c48ead1-c06b-4a13-b92a-ce7a474e6233-util\") pod \"5c48ead1-c06b-4a13-b92a-ce7a474e6233\" (UID: \"5c48ead1-c06b-4a13-b92a-ce7a474e6233\") " Feb 02 13:19:19 crc kubenswrapper[4721]: I0202 13:19:19.771898 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c48ead1-c06b-4a13-b92a-ce7a474e6233-bundle" (OuterVolumeSpecName: "bundle") pod "5c48ead1-c06b-4a13-b92a-ce7a474e6233" (UID: "5c48ead1-c06b-4a13-b92a-ce7a474e6233"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:19:19 crc kubenswrapper[4721]: I0202 13:19:19.776682 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c48ead1-c06b-4a13-b92a-ce7a474e6233-kube-api-access-ckmx9" (OuterVolumeSpecName: "kube-api-access-ckmx9") pod "5c48ead1-c06b-4a13-b92a-ce7a474e6233" (UID: "5c48ead1-c06b-4a13-b92a-ce7a474e6233"). InnerVolumeSpecName "kube-api-access-ckmx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:19:19 crc kubenswrapper[4721]: I0202 13:19:19.787897 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c48ead1-c06b-4a13-b92a-ce7a474e6233-util" (OuterVolumeSpecName: "util") pod "5c48ead1-c06b-4a13-b92a-ce7a474e6233" (UID: "5c48ead1-c06b-4a13-b92a-ce7a474e6233"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:19:19 crc kubenswrapper[4721]: I0202 13:19:19.872197 4721 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c48ead1-c06b-4a13-b92a-ce7a474e6233-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:19 crc kubenswrapper[4721]: I0202 13:19:19.872237 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckmx9\" (UniqueName: \"kubernetes.io/projected/5c48ead1-c06b-4a13-b92a-ce7a474e6233-kube-api-access-ckmx9\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:19 crc kubenswrapper[4721]: I0202 13:19:19.872252 4721 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c48ead1-c06b-4a13-b92a-ce7a474e6233-util\") on node \"crc\" DevicePath \"\"" Feb 02 13:19:20 crc kubenswrapper[4721]: I0202 13:19:20.406778 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n" event={"ID":"5c48ead1-c06b-4a13-b92a-ce7a474e6233","Type":"ContainerDied","Data":"d0d8199e845beed805cad86e56edb529277d411808ef589eb129d8a543059372"} Feb 02 13:19:20 crc kubenswrapper[4721]: I0202 13:19:20.406839 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n" Feb 02 13:19:20 crc kubenswrapper[4721]: I0202 13:19:20.406862 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0d8199e845beed805cad86e56edb529277d411808ef589eb129d8a543059372" Feb 02 13:19:25 crc kubenswrapper[4721]: I0202 13:19:25.968998 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-b64b9f5cb-mqpbl"] Feb 02 13:19:25 crc kubenswrapper[4721]: E0202 13:19:25.970663 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c48ead1-c06b-4a13-b92a-ce7a474e6233" containerName="util" Feb 02 13:19:25 crc kubenswrapper[4721]: I0202 13:19:25.970743 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c48ead1-c06b-4a13-b92a-ce7a474e6233" containerName="util" Feb 02 13:19:25 crc kubenswrapper[4721]: E0202 13:19:25.970806 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c48ead1-c06b-4a13-b92a-ce7a474e6233" containerName="extract" Feb 02 13:19:25 crc kubenswrapper[4721]: I0202 13:19:25.970885 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c48ead1-c06b-4a13-b92a-ce7a474e6233" containerName="extract" Feb 02 13:19:25 crc kubenswrapper[4721]: E0202 13:19:25.970953 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c48ead1-c06b-4a13-b92a-ce7a474e6233" containerName="pull" Feb 02 13:19:25 crc kubenswrapper[4721]: I0202 13:19:25.971014 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c48ead1-c06b-4a13-b92a-ce7a474e6233" containerName="pull" Feb 02 13:19:25 crc kubenswrapper[4721]: I0202 13:19:25.971268 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c48ead1-c06b-4a13-b92a-ce7a474e6233" containerName="extract" Feb 02 13:19:25 crc kubenswrapper[4721]: I0202 13:19:25.972013 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-b64b9f5cb-mqpbl" Feb 02 13:19:25 crc kubenswrapper[4721]: I0202 13:19:25.975487 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-f9rpt" Feb 02 13:19:26 crc kubenswrapper[4721]: I0202 13:19:26.004229 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-b64b9f5cb-mqpbl"] Feb 02 13:19:26 crc kubenswrapper[4721]: I0202 13:19:26.075307 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dt7x\" (UniqueName: \"kubernetes.io/projected/e4514067-762e-4638-ad5a-a7d17297bc0d-kube-api-access-7dt7x\") pod \"openstack-operator-controller-init-b64b9f5cb-mqpbl\" (UID: \"e4514067-762e-4638-ad5a-a7d17297bc0d\") " pod="openstack-operators/openstack-operator-controller-init-b64b9f5cb-mqpbl" Feb 02 13:19:26 crc kubenswrapper[4721]: I0202 13:19:26.177583 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dt7x\" (UniqueName: \"kubernetes.io/projected/e4514067-762e-4638-ad5a-a7d17297bc0d-kube-api-access-7dt7x\") pod \"openstack-operator-controller-init-b64b9f5cb-mqpbl\" (UID: \"e4514067-762e-4638-ad5a-a7d17297bc0d\") " pod="openstack-operators/openstack-operator-controller-init-b64b9f5cb-mqpbl" Feb 02 13:19:26 crc kubenswrapper[4721]: I0202 13:19:26.208960 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dt7x\" (UniqueName: \"kubernetes.io/projected/e4514067-762e-4638-ad5a-a7d17297bc0d-kube-api-access-7dt7x\") pod \"openstack-operator-controller-init-b64b9f5cb-mqpbl\" (UID: \"e4514067-762e-4638-ad5a-a7d17297bc0d\") " pod="openstack-operators/openstack-operator-controller-init-b64b9f5cb-mqpbl" Feb 02 13:19:26 crc kubenswrapper[4721]: I0202 13:19:26.295059 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-b64b9f5cb-mqpbl" Feb 02 13:19:26 crc kubenswrapper[4721]: I0202 13:19:26.754449 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-b64b9f5cb-mqpbl"] Feb 02 13:19:27 crc kubenswrapper[4721]: I0202 13:19:27.459514 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-b64b9f5cb-mqpbl" event={"ID":"e4514067-762e-4638-ad5a-a7d17297bc0d","Type":"ContainerStarted","Data":"29d5dae63b32855362a786b7f3dddc85373ac96887f27f9ff95144eeaf71899d"} Feb 02 13:19:31 crc kubenswrapper[4721]: I0202 13:19:31.501783 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-b64b9f5cb-mqpbl" event={"ID":"e4514067-762e-4638-ad5a-a7d17297bc0d","Type":"ContainerStarted","Data":"482c43c1cef1082dfd20266dc55096aad6c262195b47f36e5f97c82bfce4c18c"} Feb 02 13:19:31 crc kubenswrapper[4721]: I0202 13:19:31.502401 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-b64b9f5cb-mqpbl" Feb 02 13:19:31 crc kubenswrapper[4721]: I0202 13:19:31.533625 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-b64b9f5cb-mqpbl" podStartSLOduration=2.722034888 podStartE2EDuration="6.533608828s" podCreationTimestamp="2026-02-02 13:19:25 +0000 UTC" firstStartedPulling="2026-02-02 13:19:26.765085274 +0000 UTC m=+1107.067599653" lastFinishedPulling="2026-02-02 13:19:30.576659204 +0000 UTC m=+1110.879173593" observedRunningTime="2026-02-02 13:19:31.525107988 +0000 UTC m=+1111.827622377" watchObservedRunningTime="2026-02-02 13:19:31.533608828 +0000 UTC m=+1111.836123217" Feb 02 13:19:36 crc kubenswrapper[4721]: I0202 13:19:36.311339 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-b64b9f5cb-mqpbl" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.043684 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8zlv5"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.045316 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8zlv5" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.048219 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-dmzhd" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.058755 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8zlv5"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.065500 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-729mv"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.066562 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-729mv" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.068207 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-chjw6" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.080623 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-q5lbf"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.082141 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-q5lbf" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.101183 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-729mv"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.120677 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-6g7pw" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.144919 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-s75st"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.146611 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-s75st" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.151221 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-7hb8c" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.155699 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-s75st"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.181246 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-q5lbf"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.196240 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-sq5w5"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.197306 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-sq5w5" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.204605 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-g6t6m" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.213200 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-sq5w5"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.239351 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p959t\" (UniqueName: \"kubernetes.io/projected/20f771bf-d003-48b0-8e50-0d1217f24b45-kube-api-access-p959t\") pod \"glance-operator-controller-manager-8886f4c47-q5lbf\" (UID: \"20f771bf-d003-48b0-8e50-0d1217f24b45\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-q5lbf" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.239458 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtj6k\" (UniqueName: \"kubernetes.io/projected/0562e590-1a66-4fbc-862d-833bc1600eac-kube-api-access-wtj6k\") pod \"cinder-operator-controller-manager-8d874c8fc-729mv\" (UID: \"0562e590-1a66-4fbc-862d-833bc1600eac\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-729mv" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.239492 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnfsj\" (UniqueName: \"kubernetes.io/projected/0c1486a5-ee95-4cde-9631-3c7c7aa31ae7-kube-api-access-jnfsj\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-8zlv5\" (UID: \"0c1486a5-ee95-4cde-9631-3c7c7aa31ae7\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8zlv5" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.250977 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-x6p4t"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.252227 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-x6p4t" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.257440 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-jl5tl" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.260645 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-hktcl"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.270084 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-hktcl" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.280434 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-kqlr4" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.280638 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.286003 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-x6p4t"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.294028 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-qqpfm"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.295448 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-qqpfm" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.298850 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-4ggtg" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.317731 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-hktcl"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.337495 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-5vbh8"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.338820 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-5vbh8" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.340427 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9dc6\" (UniqueName: \"kubernetes.io/projected/e5a04e0d-8a73-4f21-a61d-374d7a5784fb-kube-api-access-k9dc6\") pod \"heat-operator-controller-manager-69d6db494d-sq5w5\" (UID: \"e5a04e0d-8a73-4f21-a61d-374d7a5784fb\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-sq5w5" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.340505 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtj6k\" (UniqueName: \"kubernetes.io/projected/0562e590-1a66-4fbc-862d-833bc1600eac-kube-api-access-wtj6k\") pod \"cinder-operator-controller-manager-8d874c8fc-729mv\" (UID: \"0562e590-1a66-4fbc-862d-833bc1600eac\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-729mv" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.340540 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48twm\" (UniqueName: \"kubernetes.io/projected/23be57b1-6b3e-4346-93f9-2c45b0562d2b-kube-api-access-48twm\") pod \"designate-operator-controller-manager-6d9697b7f4-s75st\" (UID: \"23be57b1-6b3e-4346-93f9-2c45b0562d2b\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-s75st" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.340576 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnfsj\" (UniqueName: \"kubernetes.io/projected/0c1486a5-ee95-4cde-9631-3c7c7aa31ae7-kube-api-access-jnfsj\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-8zlv5\" (UID: \"0c1486a5-ee95-4cde-9631-3c7c7aa31ae7\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8zlv5" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.341240 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p959t\" (UniqueName: \"kubernetes.io/projected/20f771bf-d003-48b0-8e50-0d1217f24b45-kube-api-access-p959t\") pod \"glance-operator-controller-manager-8886f4c47-q5lbf\" (UID: \"20f771bf-d003-48b0-8e50-0d1217f24b45\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-q5lbf" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.346741 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-ln2lt" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.378500 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-qqpfm"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.378530 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p959t\" (UniqueName: \"kubernetes.io/projected/20f771bf-d003-48b0-8e50-0d1217f24b45-kube-api-access-p959t\") pod \"glance-operator-controller-manager-8886f4c47-q5lbf\" (UID: \"20f771bf-d003-48b0-8e50-0d1217f24b45\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-q5lbf" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.381742 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtj6k\" (UniqueName: 
\"kubernetes.io/projected/0562e590-1a66-4fbc-862d-833bc1600eac-kube-api-access-wtj6k\") pod \"cinder-operator-controller-manager-8d874c8fc-729mv\" (UID: \"0562e590-1a66-4fbc-862d-833bc1600eac\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-729mv" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.383701 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-5vbh8"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.384983 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-729mv" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.394019 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-5x28t"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.395354 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5x28t" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.398990 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-t4sx5" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.399649 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnfsj\" (UniqueName: \"kubernetes.io/projected/0c1486a5-ee95-4cde-9631-3c7c7aa31ae7-kube-api-access-jnfsj\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-8zlv5\" (UID: \"0c1486a5-ee95-4cde-9631-3c7c7aa31ae7\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8zlv5" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.407457 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-q5lbf" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.470922 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srt9d\" (UniqueName: \"kubernetes.io/projected/9d11c3e4-10b4-4ff4-aaa2-04e342d984b4-kube-api-access-srt9d\") pod \"infra-operator-controller-manager-79955696d6-hktcl\" (UID: \"9d11c3e4-10b4-4ff4-aaa2-04e342d984b4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-hktcl" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.471261 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5s4n\" (UniqueName: \"kubernetes.io/projected/8a86dacc-de73-4b52-994c-3b089ee427cc-kube-api-access-h5s4n\") pod \"horizon-operator-controller-manager-5fb775575f-x6p4t\" (UID: \"8a86dacc-de73-4b52-994c-3b089ee427cc\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-x6p4t" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.471329 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9d11c3e4-10b4-4ff4-aaa2-04e342d984b4-cert\") pod \"infra-operator-controller-manager-79955696d6-hktcl\" (UID: \"9d11c3e4-10b4-4ff4-aaa2-04e342d984b4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-hktcl" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.471358 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9dc6\" (UniqueName: \"kubernetes.io/projected/e5a04e0d-8a73-4f21-a61d-374d7a5784fb-kube-api-access-k9dc6\") pod \"heat-operator-controller-manager-69d6db494d-sq5w5\" (UID: \"e5a04e0d-8a73-4f21-a61d-374d7a5784fb\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-sq5w5" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.471387 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9qz2\" (UniqueName: \"kubernetes.io/projected/b9eeabcd-14ef-4800-9f0c-1a3cd515d2aa-kube-api-access-d9qz2\") pod \"keystone-operator-controller-manager-84f48565d4-5vbh8\" (UID: \"b9eeabcd-14ef-4800-9f0c-1a3cd515d2aa\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-5vbh8" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.471512 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48twm\" (UniqueName: \"kubernetes.io/projected/23be57b1-6b3e-4346-93f9-2c45b0562d2b-kube-api-access-48twm\") pod \"designate-operator-controller-manager-6d9697b7f4-s75st\" (UID: \"23be57b1-6b3e-4346-93f9-2c45b0562d2b\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-s75st" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.471604 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsmbn\" (UniqueName: \"kubernetes.io/projected/a13f2341-6b53-4a7b-b67a-4a1d1846805d-kube-api-access-zsmbn\") pod \"ironic-operator-controller-manager-5f4b8bd54d-qqpfm\" (UID: \"a13f2341-6b53-4a7b-b67a-4a1d1846805d\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-qqpfm" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.540038 4721 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-ct6hc"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.541598 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-ct6hc"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.541617 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-sjvjw"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.544145 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ct6hc" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.544840 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-5x28t"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.544887 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-sjvjw"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.544985 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sjvjw" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.552312 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-42qq8"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.554878 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-42qq8" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.556835 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-fgcn6" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.556906 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9dc6\" (UniqueName: \"kubernetes.io/projected/e5a04e0d-8a73-4f21-a61d-374d7a5784fb-kube-api-access-k9dc6\") pod \"heat-operator-controller-manager-69d6db494d-sq5w5\" (UID: \"e5a04e0d-8a73-4f21-a61d-374d7a5784fb\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-sq5w5" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.557276 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-kvpsj" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.557408 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-w6nvq" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.569942 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48twm\" (UniqueName: \"kubernetes.io/projected/23be57b1-6b3e-4346-93f9-2c45b0562d2b-kube-api-access-48twm\") pod \"designate-operator-controller-manager-6d9697b7f4-s75st\" (UID: \"23be57b1-6b3e-4346-93f9-2c45b0562d2b\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-s75st" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.574540 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsmbn\" (UniqueName: \"kubernetes.io/projected/a13f2341-6b53-4a7b-b67a-4a1d1846805d-kube-api-access-zsmbn\") pod 
\"ironic-operator-controller-manager-5f4b8bd54d-qqpfm\" (UID: \"a13f2341-6b53-4a7b-b67a-4a1d1846805d\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-qqpfm" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.574974 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqstl\" (UniqueName: \"kubernetes.io/projected/39686eda-a258-408b-bf9c-7ff7d515ed9d-kube-api-access-rqstl\") pod \"manila-operator-controller-manager-7dd968899f-5x28t\" (UID: \"39686eda-a258-408b-bf9c-7ff7d515ed9d\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5x28t" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.575009 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srt9d\" (UniqueName: \"kubernetes.io/projected/9d11c3e4-10b4-4ff4-aaa2-04e342d984b4-kube-api-access-srt9d\") pod \"infra-operator-controller-manager-79955696d6-hktcl\" (UID: \"9d11c3e4-10b4-4ff4-aaa2-04e342d984b4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-hktcl" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.575037 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5s4n\" (UniqueName: \"kubernetes.io/projected/8a86dacc-de73-4b52-994c-3b089ee427cc-kube-api-access-h5s4n\") pod \"horizon-operator-controller-manager-5fb775575f-x6p4t\" (UID: \"8a86dacc-de73-4b52-994c-3b089ee427cc\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-x6p4t" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.577128 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-6z258"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.578476 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-6z258" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.584047 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-5fjnh" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.587293 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9d11c3e4-10b4-4ff4-aaa2-04e342d984b4-cert\") pod \"infra-operator-controller-manager-79955696d6-hktcl\" (UID: \"9d11c3e4-10b4-4ff4-aaa2-04e342d984b4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-hktcl" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.587360 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9qz2\" (UniqueName: \"kubernetes.io/projected/b9eeabcd-14ef-4800-9f0c-1a3cd515d2aa-kube-api-access-d9qz2\") pod \"keystone-operator-controller-manager-84f48565d4-5vbh8\" (UID: \"b9eeabcd-14ef-4800-9f0c-1a3cd515d2aa\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-5vbh8" Feb 02 13:20:14 crc kubenswrapper[4721]: E0202 13:20:14.589015 4721 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 13:20:14 crc kubenswrapper[4721]: E0202 13:20:14.589176 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d11c3e4-10b4-4ff4-aaa2-04e342d984b4-cert podName:9d11c3e4-10b4-4ff4-aaa2-04e342d984b4 nodeName:}" failed. 
No retries permitted until 2026-02-02 13:20:15.089157231 +0000 UTC m=+1155.391671620 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9d11c3e4-10b4-4ff4-aaa2-04e342d984b4-cert") pod "infra-operator-controller-manager-79955696d6-hktcl" (UID: "9d11c3e4-10b4-4ff4-aaa2-04e342d984b4") : secret "infra-operator-webhook-server-cert" not found Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.616174 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-42qq8"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.622737 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsmbn\" (UniqueName: \"kubernetes.io/projected/a13f2341-6b53-4a7b-b67a-4a1d1846805d-kube-api-access-zsmbn\") pod \"ironic-operator-controller-manager-5f4b8bd54d-qqpfm\" (UID: \"a13f2341-6b53-4a7b-b67a-4a1d1846805d\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-qqpfm" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.629435 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srt9d\" (UniqueName: \"kubernetes.io/projected/9d11c3e4-10b4-4ff4-aaa2-04e342d984b4-kube-api-access-srt9d\") pod \"infra-operator-controller-manager-79955696d6-hktcl\" (UID: \"9d11c3e4-10b4-4ff4-aaa2-04e342d984b4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-hktcl" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.634994 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-qqpfm" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.652790 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5s4n\" (UniqueName: \"kubernetes.io/projected/8a86dacc-de73-4b52-994c-3b089ee427cc-kube-api-access-h5s4n\") pod \"horizon-operator-controller-manager-5fb775575f-x6p4t\" (UID: \"8a86dacc-de73-4b52-994c-3b089ee427cc\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-x6p4t" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.663142 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-6z258"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.664792 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9qz2\" (UniqueName: \"kubernetes.io/projected/b9eeabcd-14ef-4800-9f0c-1a3cd515d2aa-kube-api-access-d9qz2\") pod \"keystone-operator-controller-manager-84f48565d4-5vbh8\" (UID: \"b9eeabcd-14ef-4800-9f0c-1a3cd515d2aa\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-5vbh8" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.668106 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.678294 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.678713 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8zlv5" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.682664 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-swlxq" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.682960 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.695021 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl8sw\" (UniqueName: \"kubernetes.io/projected/dc736681-960e-4f76-bc10-25f529da020a-kube-api-access-dl8sw\") pod \"octavia-operator-controller-manager-6687f8d877-42qq8\" (UID: \"dc736681-960e-4f76-bc10-25f529da020a\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-42qq8" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.695146 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-662gz\" (UniqueName: \"kubernetes.io/projected/2d60d537-ea47-42fa-94c3-61704aef0678-kube-api-access-662gz\") pod \"neutron-operator-controller-manager-585dbc889-sjvjw\" (UID: \"2d60d537-ea47-42fa-94c3-61704aef0678\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sjvjw" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.695194 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqstl\" (UniqueName: \"kubernetes.io/projected/39686eda-a258-408b-bf9c-7ff7d515ed9d-kube-api-access-rqstl\") pod \"manila-operator-controller-manager-7dd968899f-5x28t\" (UID: \"39686eda-a258-408b-bf9c-7ff7d515ed9d\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5x28t" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.695251 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fnk4\" (UniqueName: \"kubernetes.io/projected/1f3087b4-acf0-4a27-9696-bdfb4728e96c-kube-api-access-8fnk4\") pod \"nova-operator-controller-manager-55bff696bd-6z258\" (UID: \"1f3087b4-acf0-4a27-9696-bdfb4728e96c\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-6z258" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.695323 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snq4r\" (UniqueName: \"kubernetes.io/projected/1c4864d3-2fdd-4b98-ac89-aefb49b56187-kube-api-access-snq4r\") pod \"mariadb-operator-controller-manager-67bf948998-ct6hc\" (UID: \"1c4864d3-2fdd-4b98-ac89-aefb49b56187\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ct6hc" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.701963 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-rzjts"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.703528 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rzjts" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.706876 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-bx7js" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.730743 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-kqdjm"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.732607 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kqdjm" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.735550 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-bczqd" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.743715 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqstl\" (UniqueName: \"kubernetes.io/projected/39686eda-a258-408b-bf9c-7ff7d515ed9d-kube-api-access-rqstl\") pod \"manila-operator-controller-manager-7dd968899f-5x28t\" (UID: \"39686eda-a258-408b-bf9c-7ff7d515ed9d\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5x28t" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.753372 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.778877 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-s75st" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.798293 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fnk4\" (UniqueName: \"kubernetes.io/projected/1f3087b4-acf0-4a27-9696-bdfb4728e96c-kube-api-access-8fnk4\") pod \"nova-operator-controller-manager-55bff696bd-6z258\" (UID: \"1f3087b4-acf0-4a27-9696-bdfb4728e96c\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-6z258" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.798377 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chbfv\" (UniqueName: \"kubernetes.io/projected/ae636942-3520-410e-b70a-b4fc19a527ca-kube-api-access-chbfv\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw\" (UID: \"ae636942-3520-410e-b70a-b4fc19a527ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.798408 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snq4r\" (UniqueName: \"kubernetes.io/projected/1c4864d3-2fdd-4b98-ac89-aefb49b56187-kube-api-access-snq4r\") pod \"mariadb-operator-controller-manager-67bf948998-ct6hc\" (UID: \"1c4864d3-2fdd-4b98-ac89-aefb49b56187\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ct6hc" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.798643 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl8sw\" (UniqueName: \"kubernetes.io/projected/dc736681-960e-4f76-bc10-25f529da020a-kube-api-access-dl8sw\") pod 
\"octavia-operator-controller-manager-6687f8d877-42qq8\" (UID: \"dc736681-960e-4f76-bc10-25f529da020a\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-42qq8" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.798746 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae636942-3520-410e-b70a-b4fc19a527ca-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw\" (UID: \"ae636942-3520-410e-b70a-b4fc19a527ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.798888 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-662gz\" (UniqueName: \"kubernetes.io/projected/2d60d537-ea47-42fa-94c3-61704aef0678-kube-api-access-662gz\") pod \"neutron-operator-controller-manager-585dbc889-sjvjw\" (UID: \"2d60d537-ea47-42fa-94c3-61704aef0678\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sjvjw" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.819638 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-sq5w5" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.824561 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl8sw\" (UniqueName: \"kubernetes.io/projected/dc736681-960e-4f76-bc10-25f529da020a-kube-api-access-dl8sw\") pod \"octavia-operator-controller-manager-6687f8d877-42qq8\" (UID: \"dc736681-960e-4f76-bc10-25f529da020a\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-42qq8" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.838658 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fnk4\" (UniqueName: \"kubernetes.io/projected/1f3087b4-acf0-4a27-9696-bdfb4728e96c-kube-api-access-8fnk4\") pod \"nova-operator-controller-manager-55bff696bd-6z258\" (UID: \"1f3087b4-acf0-4a27-9696-bdfb4728e96c\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-6z258" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.841461 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-662gz\" (UniqueName: \"kubernetes.io/projected/2d60d537-ea47-42fa-94c3-61704aef0678-kube-api-access-662gz\") pod \"neutron-operator-controller-manager-585dbc889-sjvjw\" (UID: \"2d60d537-ea47-42fa-94c3-61704aef0678\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sjvjw" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.850662 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-kqdjm"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.859641 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snq4r\" (UniqueName: \"kubernetes.io/projected/1c4864d3-2fdd-4b98-ac89-aefb49b56187-kube-api-access-snq4r\") pod \"mariadb-operator-controller-manager-67bf948998-ct6hc\" (UID: \"1c4864d3-2fdd-4b98-ac89-aefb49b56187\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ct6hc" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.876404 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-x6p4t" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.889365 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-rzjts"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.912976 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chbfv\" (UniqueName: \"kubernetes.io/projected/ae636942-3520-410e-b70a-b4fc19a527ca-kube-api-access-chbfv\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw\" (UID: \"ae636942-3520-410e-b70a-b4fc19a527ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.913086 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dqmh\" (UniqueName: \"kubernetes.io/projected/ed67384c-22d3-4466-8990-744b122efbf4-kube-api-access-6dqmh\") pod \"placement-operator-controller-manager-5b964cf4cd-kqdjm\" (UID: \"ed67384c-22d3-4466-8990-744b122efbf4\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kqdjm" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.913127 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae636942-3520-410e-b70a-b4fc19a527ca-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw\" (UID: \"ae636942-3520-410e-b70a-b4fc19a527ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.913193 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2svrc\" (UniqueName: \"kubernetes.io/projected/6b33adce-a49a-4ce2-af29-412661aaf062-kube-api-access-2svrc\") pod \"ovn-operator-controller-manager-788c46999f-rzjts\" (UID: \"6b33adce-a49a-4ce2-af29-412661aaf062\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rzjts" Feb 02 13:20:14 crc kubenswrapper[4721]: E0202 13:20:14.913572 4721 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 13:20:14 crc kubenswrapper[4721]: E0202 13:20:14.913613 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae636942-3520-410e-b70a-b4fc19a527ca-cert podName:ae636942-3520-410e-b70a-b4fc19a527ca nodeName:}" failed. No retries permitted until 2026-02-02 13:20:15.413597029 +0000 UTC m=+1155.716111418 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae636942-3520-410e-b70a-b4fc19a527ca-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw" (UID: "ae636942-3520-410e-b70a-b4fc19a527ca") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.927683 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5x28t" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.945866 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chbfv\" (UniqueName: \"kubernetes.io/projected/ae636942-3520-410e-b70a-b4fc19a527ca-kube-api-access-chbfv\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw\" (UID: \"ae636942-3520-410e-b70a-b4fc19a527ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.955193 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-79zrv"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.956605 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-79zrv" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.961144 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-zxjhw" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.962429 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-79zrv"] Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.964034 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-5vbh8" Feb 02 13:20:14 crc kubenswrapper[4721]: I0202 13:20:14.983786 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ct6hc" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.001924 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sjvjw" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.007924 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5b9ffd7d65-rgkhb"] Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.009252 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5b9ffd7d65-rgkhb" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.012582 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-vk4zk" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.014604 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dqmh\" (UniqueName: \"kubernetes.io/projected/ed67384c-22d3-4466-8990-744b122efbf4-kube-api-access-6dqmh\") pod \"placement-operator-controller-manager-5b964cf4cd-kqdjm\" (UID: \"ed67384c-22d3-4466-8990-744b122efbf4\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kqdjm" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.014724 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2svrc\" (UniqueName: \"kubernetes.io/projected/6b33adce-a49a-4ce2-af29-412661aaf062-kube-api-access-2svrc\") pod \"ovn-operator-controller-manager-788c46999f-rzjts\" (UID: \"6b33adce-a49a-4ce2-af29-412661aaf062\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rzjts" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.036576 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5b9ffd7d65-rgkhb"] Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.037616 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2svrc\" (UniqueName: \"kubernetes.io/projected/6b33adce-a49a-4ce2-af29-412661aaf062-kube-api-access-2svrc\") pod \"ovn-operator-controller-manager-788c46999f-rzjts\" (UID: \"6b33adce-a49a-4ce2-af29-412661aaf062\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rzjts" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.037778 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-42qq8" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.043378 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dqmh\" (UniqueName: \"kubernetes.io/projected/ed67384c-22d3-4466-8990-744b122efbf4-kube-api-access-6dqmh\") pod \"placement-operator-controller-manager-5b964cf4cd-kqdjm\" (UID: \"ed67384c-22d3-4466-8990-744b122efbf4\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kqdjm" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.055966 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-6z258" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.057196 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-2828d"] Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.058610 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2828d" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.061253 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-hdr94" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.065681 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-2828d"] Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.099186 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-4pk6v"] Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.100292 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-4pk6v" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.104282 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-8sckr" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.113541 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rzjts" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.114275 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-4pk6v"] Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.115729 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px5rv\" (UniqueName: \"kubernetes.io/projected/79e5221b-04ee-496d-82b7-16af5b340595-kube-api-access-px5rv\") pod \"telemetry-operator-controller-manager-5b9ffd7d65-rgkhb\" (UID: \"79e5221b-04ee-496d-82b7-16af5b340595\") " pod="openstack-operators/telemetry-operator-controller-manager-5b9ffd7d65-rgkhb" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.115819 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg56x\" (UniqueName: \"kubernetes.io/projected/499ca4ef-3867-407b-ab4a-64fff307e296-kube-api-access-tg56x\") pod \"swift-operator-controller-manager-68fc8c869-79zrv\" (UID: \"499ca4ef-3867-407b-ab4a-64fff307e296\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-79zrv" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.115857 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9d11c3e4-10b4-4ff4-aaa2-04e342d984b4-cert\") pod \"infra-operator-controller-manager-79955696d6-hktcl\" (UID: \"9d11c3e4-10b4-4ff4-aaa2-04e342d984b4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-hktcl" Feb 02 13:20:15 crc kubenswrapper[4721]: E0202 13:20:15.115953 4721 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 13:20:15 crc kubenswrapper[4721]: E0202 13:20:15.116010 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d11c3e4-10b4-4ff4-aaa2-04e342d984b4-cert podName:9d11c3e4-10b4-4ff4-aaa2-04e342d984b4 nodeName:}" failed. No retries permitted until 2026-02-02 13:20:16.115995633 +0000 UTC m=+1156.418510022 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9d11c3e4-10b4-4ff4-aaa2-04e342d984b4-cert") pod "infra-operator-controller-manager-79955696d6-hktcl" (UID: "9d11c3e4-10b4-4ff4-aaa2-04e342d984b4") : secret "infra-operator-webhook-server-cert" not found Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.131415 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kqdjm" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.228339 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w294l\" (UniqueName: \"kubernetes.io/projected/60ff9309-fd37-4618-b4f0-38704a558ec0-kube-api-access-w294l\") pod \"test-operator-controller-manager-56f8bfcd9f-2828d\" (UID: \"60ff9309-fd37-4618-b4f0-38704a558ec0\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2828d" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.228429 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg56x\" (UniqueName: \"kubernetes.io/projected/499ca4ef-3867-407b-ab4a-64fff307e296-kube-api-access-tg56x\") pod \"swift-operator-controller-manager-68fc8c869-79zrv\" (UID: \"499ca4ef-3867-407b-ab4a-64fff307e296\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-79zrv" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.228528 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrbbc\" (UniqueName: \"kubernetes.io/projected/058f996d-8009-4f83-864d-177f7b577cf0-kube-api-access-hrbbc\") pod \"watcher-operator-controller-manager-564965969-4pk6v\" (UID: \"058f996d-8009-4f83-864d-177f7b577cf0\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-4pk6v" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.228664 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px5rv\" (UniqueName: \"kubernetes.io/projected/79e5221b-04ee-496d-82b7-16af5b340595-kube-api-access-px5rv\") pod \"telemetry-operator-controller-manager-5b9ffd7d65-rgkhb\" (UID: \"79e5221b-04ee-496d-82b7-16af5b340595\") " pod="openstack-operators/telemetry-operator-controller-manager-5b9ffd7d65-rgkhb" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.256110 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px5rv\" (UniqueName: \"kubernetes.io/projected/79e5221b-04ee-496d-82b7-16af5b340595-kube-api-access-px5rv\") pod \"telemetry-operator-controller-manager-5b9ffd7d65-rgkhb\" (UID: \"79e5221b-04ee-496d-82b7-16af5b340595\") " pod="openstack-operators/telemetry-operator-controller-manager-5b9ffd7d65-rgkhb" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.293437 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p"] Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.294559 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.324148 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.324402 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-njf7c" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.332857 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.339534 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p"] Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.346695 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg56x\" (UniqueName: \"kubernetes.io/projected/499ca4ef-3867-407b-ab4a-64fff307e296-kube-api-access-tg56x\") pod \"swift-operator-controller-manager-68fc8c869-79zrv\" (UID: \"499ca4ef-3867-407b-ab4a-64fff307e296\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-79zrv" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.348511 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrbbc\" (UniqueName: \"kubernetes.io/projected/058f996d-8009-4f83-864d-177f7b577cf0-kube-api-access-hrbbc\") pod \"watcher-operator-controller-manager-564965969-4pk6v\" (UID: \"058f996d-8009-4f83-864d-177f7b577cf0\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-4pk6v" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.370763 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w294l\" (UniqueName: \"kubernetes.io/projected/60ff9309-fd37-4618-b4f0-38704a558ec0-kube-api-access-w294l\") pod \"test-operator-controller-manager-56f8bfcd9f-2828d\" (UID: \"60ff9309-fd37-4618-b4f0-38704a558ec0\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2828d" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.355845 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5b9ffd7d65-rgkhb" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.410473 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w294l\" (UniqueName: \"kubernetes.io/projected/60ff9309-fd37-4618-b4f0-38704a558ec0-kube-api-access-w294l\") pod \"test-operator-controller-manager-56f8bfcd9f-2828d\" (UID: \"60ff9309-fd37-4618-b4f0-38704a558ec0\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2828d" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.445274 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-729mv" event={"ID":"0562e590-1a66-4fbc-862d-833bc1600eac","Type":"ContainerStarted","Data":"802ab0de64a113538a1aec3f496bfaf10c4f0a0553788be15606e0eb9778f801"} Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.455040 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrbbc\" (UniqueName: \"kubernetes.io/projected/058f996d-8009-4f83-864d-177f7b577cf0-kube-api-access-hrbbc\") pod \"watcher-operator-controller-manager-564965969-4pk6v\" (UID: \"058f996d-8009-4f83-864d-177f7b577cf0\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-4pk6v" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.465324 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-4pk6v" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.479270 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-metrics-certs\") pod \"openstack-operator-controller-manager-5d6c59fb84-5s25p\" (UID: \"55bc1d80-1d29-4e15-baca-49eee6fd3aa5\") " pod="openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.479480 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw6c5\" (UniqueName: \"kubernetes.io/projected/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-kube-api-access-cw6c5\") pod \"openstack-operator-controller-manager-5d6c59fb84-5s25p\" (UID: \"55bc1d80-1d29-4e15-baca-49eee6fd3aa5\") " pod="openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.479545 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae636942-3520-410e-b70a-b4fc19a527ca-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw\" (UID: \"ae636942-3520-410e-b70a-b4fc19a527ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.479608 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-webhook-certs\") pod \"openstack-operator-controller-manager-5d6c59fb84-5s25p\" (UID: \"55bc1d80-1d29-4e15-baca-49eee6fd3aa5\") " pod="openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p" Feb 02 13:20:15 crc kubenswrapper[4721]: E0202 13:20:15.480970 4721 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 13:20:15 crc kubenswrapper[4721]: E0202 13:20:15.481012 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae636942-3520-410e-b70a-b4fc19a527ca-cert podName:ae636942-3520-410e-b70a-b4fc19a527ca nodeName:}" failed. No retries permitted until 2026-02-02 13:20:16.480998445 +0000 UTC m=+1156.783512834 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae636942-3520-410e-b70a-b4fc19a527ca-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw" (UID: "ae636942-3520-410e-b70a-b4fc19a527ca") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.483032 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xwkhz"] Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.484756 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xwkhz" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.502482 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-gtzwh" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.503806 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xwkhz"] Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.581134 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-metrics-certs\") pod \"openstack-operator-controller-manager-5d6c59fb84-5s25p\" (UID: \"55bc1d80-1d29-4e15-baca-49eee6fd3aa5\") " pod="openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p" Feb 02 13:20:15 crc kubenswrapper[4721]: E0202 13:20:15.583742 4721 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
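
The secret.go:188 / nestedpendingoperations.go:348 pairs above all have the same shape: MountVolume.SetUp for a secret volume fails because the Secret it references does not exist yet in the openstack-operators namespace. Four names recur through this stretch of the log: webhook-server-cert, metrics-server-cert, infra-operator-webhook-server-cert, and openstack-baremetal-operator-webhook-server-cert. These are webhook/metrics serving certificates that something else (typically cert-manager or the operator bundle) publishes after the kubelet has already been told to start the pods, so the kubelet keeps retrying until they appear. A minimal client-go sketch for checking which of them exist; the Secret names and namespace are taken from the errors above, the kubeconfig location is an assumption, and this is an illustration rather than anything the kubelet itself runs:

    // checksecrets.go - report which of the cert Secrets the kubelet is
    // waiting for actually exist (sketch; assumes kubeconfig at ~/.kube/config).
    package main

    import (
        "context"
        "fmt"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Secret names copied from the MountVolume.SetUp errors in this log.
        for _, name := range []string{
            "webhook-server-cert",
            "metrics-server-cert",
            "infra-operator-webhook-server-cert",
            "openstack-baremetal-operator-webhook-server-cert",
        } {
            _, err := cs.CoreV1().Secrets("openstack-operators").Get(context.TODO(), name, metav1.GetOptions{})
            switch {
            case apierrors.IsNotFound(err):
                fmt.Printf("%s: still missing\n", name)
            case err != nil:
                fmt.Printf("%s: lookup error: %v\n", name, err)
            default:
                fmt.Printf("%s: present\n", name)
            }
        }
    }

Once the Secrets are published, the kubelet's next retry mounts the volume and the pod proceeds; the entries below need no manual intervention as long as the certificates eventually show up.
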
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-metrics-certs") pod "openstack-operator-controller-manager-5d6c59fb84-5s25p" (UID: "55bc1d80-1d29-4e15-baca-49eee6fd3aa5") : secret "metrics-server-cert" not found Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.588175 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw6c5\" (UniqueName: \"kubernetes.io/projected/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-kube-api-access-cw6c5\") pod \"openstack-operator-controller-manager-5d6c59fb84-5s25p\" (UID: \"55bc1d80-1d29-4e15-baca-49eee6fd3aa5\") " pod="openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.588358 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-webhook-certs\") pod \"openstack-operator-controller-manager-5d6c59fb84-5s25p\" (UID: \"55bc1d80-1d29-4e15-baca-49eee6fd3aa5\") " pod="openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p" Feb 02 13:20:15 crc kubenswrapper[4721]: E0202 13:20:15.589375 4721 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 13:20:15 crc kubenswrapper[4721]: E0202 13:20:15.589415 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-webhook-certs podName:55bc1d80-1d29-4e15-baca-49eee6fd3aa5 nodeName:}" failed. No retries permitted until 2026-02-02 13:20:16.089400282 +0000 UTC m=+1156.391914671 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-webhook-certs") pod "openstack-operator-controller-manager-5d6c59fb84-5s25p" (UID: "55bc1d80-1d29-4e15-baca-49eee6fd3aa5") : secret "webhook-server-cert" not found Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.622792 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw6c5\" (UniqueName: \"kubernetes.io/projected/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-kube-api-access-cw6c5\") pod \"openstack-operator-controller-manager-5d6c59fb84-5s25p\" (UID: \"55bc1d80-1d29-4e15-baca-49eee6fd3aa5\") " pod="openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.633474 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-79zrv" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.692641 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hb97\" (UniqueName: \"kubernetes.io/projected/56b67b2b-b9fd-4353-88e3-d4f1d44653e2-kube-api-access-5hb97\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xwkhz\" (UID: \"56b67b2b-b9fd-4353-88e3-d4f1d44653e2\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xwkhz" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.693139 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2828d" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.723655 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-729mv"] Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.798198 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hb97\" (UniqueName: \"kubernetes.io/projected/56b67b2b-b9fd-4353-88e3-d4f1d44653e2-kube-api-access-5hb97\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xwkhz\" (UID: \"56b67b2b-b9fd-4353-88e3-d4f1d44653e2\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xwkhz" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.798815 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-q5lbf"] Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.820665 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hb97\" (UniqueName: \"kubernetes.io/projected/56b67b2b-b9fd-4353-88e3-d4f1d44653e2-kube-api-access-5hb97\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xwkhz\" (UID: \"56b67b2b-b9fd-4353-88e3-d4f1d44653e2\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xwkhz" Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.921373 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-qqpfm"] Feb 02 13:20:15 crc kubenswrapper[4721]: I0202 13:20:15.937173 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8zlv5"] Feb 02 13:20:15 crc kubenswrapper[4721]: W0202 13:20:15.996642 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda13f2341_6b53_4a7b_b67a_4a1d1846805d.slice/crio-81ebee5ebf47d1183088f5a241c61e8495e7bbfda07a21c496c5570593db4052 WatchSource:0}: Error finding container 81ebee5ebf47d1183088f5a241c61e8495e7bbfda07a21c496c5570593db4052: Status 404 returned error can't find the container with id 81ebee5ebf47d1183088f5a241c61e8495e7bbfda07a21c496c5570593db4052 Feb 02 13:20:15 crc kubenswrapper[4721]: W0202 13:20:15.999110 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c1486a5_ee95_4cde_9631_3c7c7aa31ae7.slice/crio-128c7313e2e60d8f33042b20b233aa59f4e6a19d54beffe21ff87896615e3018 WatchSource:0}: Error finding container 128c7313e2e60d8f33042b20b233aa59f4e6a19d54beffe21ff87896615e3018: Status 404 returned error can't find the container with id 128c7313e2e60d8f33042b20b233aa59f4e6a19d54beffe21ff87896615e3018 Feb 02 13:20:16 crc kubenswrapper[4721]: I0202 13:20:16.072679 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xwkhz" Feb 02 13:20:16 crc kubenswrapper[4721]: I0202 13:20:16.115480 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-webhook-certs\") pod \"openstack-operator-controller-manager-5d6c59fb84-5s25p\" (UID: \"55bc1d80-1d29-4e15-baca-49eee6fd3aa5\") " pod="openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p" Feb 02 13:20:16 crc kubenswrapper[4721]: I0202 13:20:16.115919 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-metrics-certs\") pod \"openstack-operator-controller-manager-5d6c59fb84-5s25p\" (UID: \"55bc1d80-1d29-4e15-baca-49eee6fd3aa5\") " pod="openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p" Feb 02 13:20:16 crc kubenswrapper[4721]: E0202 13:20:16.116290 4721 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 13:20:16 crc kubenswrapper[4721]: E0202 13:20:16.116356 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-metrics-certs podName:55bc1d80-1d29-4e15-baca-49eee6fd3aa5 nodeName:}" failed. No retries permitted until 2026-02-02 13:20:17.116336276 +0000 UTC m=+1157.418850675 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-metrics-certs") pod "openstack-operator-controller-manager-5d6c59fb84-5s25p" (UID: "55bc1d80-1d29-4e15-baca-49eee6fd3aa5") : secret "metrics-server-cert" not found Feb 02 13:20:16 crc kubenswrapper[4721]: E0202 13:20:16.116825 4721 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 13:20:16 crc kubenswrapper[4721]: E0202 13:20:16.116904 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-webhook-certs podName:55bc1d80-1d29-4e15-baca-49eee6fd3aa5 nodeName:}" failed. No retries permitted until 2026-02-02 13:20:17.116881812 +0000 UTC m=+1157.419396261 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-webhook-certs") pod "openstack-operator-controller-manager-5d6c59fb84-5s25p" (UID: "55bc1d80-1d29-4e15-baca-49eee6fd3aa5") : secret "webhook-server-cert" not found Feb 02 13:20:16 crc kubenswrapper[4721]: I0202 13:20:16.217445 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9d11c3e4-10b4-4ff4-aaa2-04e342d984b4-cert\") pod \"infra-operator-controller-manager-79955696d6-hktcl\" (UID: \"9d11c3e4-10b4-4ff4-aaa2-04e342d984b4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-hktcl" Feb 02 13:20:16 crc kubenswrapper[4721]: E0202 13:20:16.217848 4721 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 13:20:16 crc kubenswrapper[4721]: E0202 13:20:16.217905 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d11c3e4-10b4-4ff4-aaa2-04e342d984b4-cert podName:9d11c3e4-10b4-4ff4-aaa2-04e342d984b4 nodeName:}" failed. No retries permitted until 2026-02-02 13:20:18.217887198 +0000 UTC m=+1158.520401597 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9d11c3e4-10b4-4ff4-aaa2-04e342d984b4-cert") pod "infra-operator-controller-manager-79955696d6-hktcl" (UID: "9d11c3e4-10b4-4ff4-aaa2-04e342d984b4") : secret "infra-operator-webhook-server-cert" not found Feb 02 13:20:16 crc kubenswrapper[4721]: I0202 13:20:16.442100 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-s75st"] Feb 02 13:20:16 crc kubenswrapper[4721]: I0202 13:20:16.462635 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-qqpfm" event={"ID":"a13f2341-6b53-4a7b-b67a-4a1d1846805d","Type":"ContainerStarted","Data":"81ebee5ebf47d1183088f5a241c61e8495e7bbfda07a21c496c5570593db4052"} Feb 02 13:20:16 crc kubenswrapper[4721]: I0202 13:20:16.465311 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-q5lbf" event={"ID":"20f771bf-d003-48b0-8e50-0d1217f24b45","Type":"ContainerStarted","Data":"3c34b8a7ad1f0e42c2b1a6bcea58158d7345e04ba4caaf5bedeae2a64c01764e"} Feb 02 13:20:16 crc kubenswrapper[4721]: I0202 13:20:16.471981 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8zlv5" event={"ID":"0c1486a5-ee95-4cde-9631-3c7c7aa31ae7","Type":"ContainerStarted","Data":"128c7313e2e60d8f33042b20b233aa59f4e6a19d54beffe21ff87896615e3018"} Feb 02 13:20:16 crc kubenswrapper[4721]: I0202 13:20:16.477308 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-s75st" event={"ID":"23be57b1-6b3e-4346-93f9-2c45b0562d2b","Type":"ContainerStarted","Data":"427142c2cd53668b37f223b38bb98c04a1e3ac386aa42354cf12f771c1bfff47"} Feb 02 13:20:16 crc kubenswrapper[4721]: I0202 13:20:16.528031 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae636942-3520-410e-b70a-b4fc19a527ca-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw\" (UID: \"ae636942-3520-410e-b70a-b4fc19a527ca\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw" Feb 02 13:20:16 crc kubenswrapper[4721]: E0202 13:20:16.528274 4721 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 13:20:16 crc kubenswrapper[4721]: E0202 13:20:16.528468 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae636942-3520-410e-b70a-b4fc19a527ca-cert podName:ae636942-3520-410e-b70a-b4fc19a527ca nodeName:}" failed. No retries permitted until 2026-02-02 13:20:18.528315877 +0000 UTC m=+1158.830830266 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae636942-3520-410e-b70a-b4fc19a527ca-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw" (UID: "ae636942-3520-410e-b70a-b4fc19a527ca") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 13:20:17 crc kubenswrapper[4721]: I0202 13:20:17.145212 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-webhook-certs\") pod \"openstack-operator-controller-manager-5d6c59fb84-5s25p\" (UID: \"55bc1d80-1d29-4e15-baca-49eee6fd3aa5\") " pod="openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p" Feb 02 13:20:17 crc kubenswrapper[4721]: I0202 13:20:17.145754 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-metrics-certs\") pod \"openstack-operator-controller-manager-5d6c59fb84-5s25p\" (UID: \"55bc1d80-1d29-4e15-baca-49eee6fd3aa5\") " pod="openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p" Feb 02 13:20:17 crc kubenswrapper[4721]: E0202 13:20:17.145655 4721 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 13:20:17 crc kubenswrapper[4721]: E0202 13:20:17.146020 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-webhook-certs podName:55bc1d80-1d29-4e15-baca-49eee6fd3aa5 nodeName:}" failed. No retries permitted until 2026-02-02 13:20:19.146003022 +0000 UTC m=+1159.448517411 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-webhook-certs") pod "openstack-operator-controller-manager-5d6c59fb84-5s25p" (UID: "55bc1d80-1d29-4e15-baca-49eee6fd3aa5") : secret "webhook-server-cert" not found Feb 02 13:20:17 crc kubenswrapper[4721]: E0202 13:20:17.145966 4721 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 13:20:17 crc kubenswrapper[4721]: E0202 13:20:17.146396 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-metrics-certs podName:55bc1d80-1d29-4e15-baca-49eee6fd3aa5 nodeName:}" failed. No retries permitted until 2026-02-02 13:20:19.146367522 +0000 UTC m=+1159.448881901 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-metrics-certs") pod "openstack-operator-controller-manager-5d6c59fb84-5s25p" (UID: "55bc1d80-1d29-4e15-baca-49eee6fd3aa5") : secret "metrics-server-cert" not found Feb 02 13:20:17 crc kubenswrapper[4721]: I0202 13:20:17.523169 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-5x28t"] Feb 02 13:20:17 crc kubenswrapper[4721]: I0202 13:20:17.539681 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-42qq8"] Feb 02 13:20:17 crc kubenswrapper[4721]: I0202 13:20:17.554838 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-sjvjw"] Feb 02 13:20:17 crc kubenswrapper[4721]: W0202 13:20:17.555812 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39686eda_a258_408b_bf9c_7ff7d515ed9d.slice/crio-e8dde490ec5376d7612923a9e2bfd3282bf5e1664bd29b15db28d5ef64d20142 WatchSource:0}: Error finding container e8dde490ec5376d7612923a9e2bfd3282bf5e1664bd29b15db28d5ef64d20142: Status 404 returned error can't find the container with id e8dde490ec5376d7612923a9e2bfd3282bf5e1664bd29b15db28d5ef64d20142 Feb 02 13:20:17 crc kubenswrapper[4721]: I0202 13:20:17.580427 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-4pk6v"] Feb 02 13:20:17 crc kubenswrapper[4721]: I0202 13:20:17.629801 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5b9ffd7d65-rgkhb"] Feb 02 13:20:17 crc kubenswrapper[4721]: I0202 13:20:17.686445 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-rzjts"] Feb 02 13:20:17 crc kubenswrapper[4721]: I0202 13:20:17.714882 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-sq5w5"] Feb 02 13:20:17 crc kubenswrapper[4721]: I0202 13:20:17.743839 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-x6p4t"] Feb 02 13:20:17 crc kubenswrapper[4721]: I0202 13:20:17.758310 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-5vbh8"] Feb 02 13:20:17 crc kubenswrapper[4721]: I0202 13:20:17.776392 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-6z258"] Feb 02 13:20:17 crc kubenswrapper[4721]: I0202 13:20:17.788175 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-ct6hc"] Feb 02 13:20:17 crc kubenswrapper[4721]: I0202 13:20:17.801218 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-2828d"] Feb 02 13:20:17 crc kubenswrapper[4721]: W0202 13:20:17.946219 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c4864d3_2fdd_4b98_ac89_aefb49b56187.slice/crio-faa5a566441e6886f18327166fefb1254f417081b69cbf6d4b69c62f40c66281 WatchSource:0}: Error finding container 
faa5a566441e6886f18327166fefb1254f417081b69cbf6d4b69c62f40c66281: Status 404 returned error can't find the container with id faa5a566441e6886f18327166fefb1254f417081b69cbf6d4b69c62f40c66281 Feb 02 13:20:18 crc kubenswrapper[4721]: I0202 13:20:18.055022 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-kqdjm"] Feb 02 13:20:18 crc kubenswrapper[4721]: I0202 13:20:18.075453 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-79zrv"] Feb 02 13:20:18 crc kubenswrapper[4721]: W0202 13:20:18.109197 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod499ca4ef_3867_407b_ab4a_64fff307e296.slice/crio-f70c073e7f283813d0de1a9ae64f36de9305b2d8727e696f1cdfe5b7d76e2f86 WatchSource:0}: Error finding container f70c073e7f283813d0de1a9ae64f36de9305b2d8727e696f1cdfe5b7d76e2f86: Status 404 returned error can't find the container with id f70c073e7f283813d0de1a9ae64f36de9305b2d8727e696f1cdfe5b7d76e2f86 Feb 02 13:20:18 crc kubenswrapper[4721]: I0202 13:20:18.117446 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xwkhz"] Feb 02 13:20:18 crc kubenswrapper[4721]: E0202 13:20:18.133350 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6dqmh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-kqdjm_openstack-operators(ed67384c-22d3-4466-8990-744b122efbf4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 13:20:18 crc kubenswrapper[4721]: E0202 13:20:18.133376 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tg56x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-79zrv_openstack-operators(499ca4ef-3867-407b-ab4a-64fff307e296): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 13:20:18 crc 
kubenswrapper[4721]: E0202 13:20:18.135214 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-79zrv" podUID="499ca4ef-3867-407b-ab4a-64fff307e296" Feb 02 13:20:18 crc kubenswrapper[4721]: E0202 13:20:18.135299 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kqdjm" podUID="ed67384c-22d3-4466-8990-744b122efbf4" Feb 02 13:20:18 crc kubenswrapper[4721]: E0202 13:20:18.156330 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5hb97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-xwkhz_openstack-operators(56b67b2b-b9fd-4353-88e3-d4f1d44653e2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 13:20:18 crc kubenswrapper[4721]: E0202 13:20:18.158000 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xwkhz" podUID="56b67b2b-b9fd-4353-88e3-d4f1d44653e2" Feb 02 13:20:18 crc kubenswrapper[4721]: I0202 13:20:18.289751 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9d11c3e4-10b4-4ff4-aaa2-04e342d984b4-cert\") pod \"infra-operator-controller-manager-79955696d6-hktcl\" (UID: \"9d11c3e4-10b4-4ff4-aaa2-04e342d984b4\") " 
pod="openstack-operators/infra-operator-controller-manager-79955696d6-hktcl" Feb 02 13:20:18 crc kubenswrapper[4721]: E0202 13:20:18.289907 4721 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 13:20:18 crc kubenswrapper[4721]: E0202 13:20:18.289952 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d11c3e4-10b4-4ff4-aaa2-04e342d984b4-cert podName:9d11c3e4-10b4-4ff4-aaa2-04e342d984b4 nodeName:}" failed. No retries permitted until 2026-02-02 13:20:22.289938682 +0000 UTC m=+1162.592453071 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9d11c3e4-10b4-4ff4-aaa2-04e342d984b4-cert") pod "infra-operator-controller-manager-79955696d6-hktcl" (UID: "9d11c3e4-10b4-4ff4-aaa2-04e342d984b4") : secret "infra-operator-webhook-server-cert" not found Feb 02 13:20:18 crc kubenswrapper[4721]: I0202 13:20:18.552530 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2828d" event={"ID":"60ff9309-fd37-4618-b4f0-38704a558ec0","Type":"ContainerStarted","Data":"993df692252a4ead9c18075ce5c8b9773f466e31a544109436477462dc5cee88"} Feb 02 13:20:18 crc kubenswrapper[4721]: I0202 13:20:18.554468 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rzjts" event={"ID":"6b33adce-a49a-4ce2-af29-412661aaf062","Type":"ContainerStarted","Data":"74eaba4e0d9d2120b42db269dfff77ce3f4a8765d56c7b00681e8546918b6163"} Feb 02 13:20:18 crc kubenswrapper[4721]: I0202 13:20:18.577449 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5x28t" event={"ID":"39686eda-a258-408b-bf9c-7ff7d515ed9d","Type":"ContainerStarted","Data":"e8dde490ec5376d7612923a9e2bfd3282bf5e1664bd29b15db28d5ef64d20142"} Feb 02 13:20:18 crc kubenswrapper[4721]: I0202 13:20:18.580478 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-sq5w5" event={"ID":"e5a04e0d-8a73-4f21-a61d-374d7a5784fb","Type":"ContainerStarted","Data":"7c365d4bc9b09ec8665498b5848df8310ec99e5b9dd74ac183527e17c8233b3b"} Feb 02 13:20:18 crc kubenswrapper[4721]: I0202 13:20:18.583997 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-79zrv" event={"ID":"499ca4ef-3867-407b-ab4a-64fff307e296","Type":"ContainerStarted","Data":"f70c073e7f283813d0de1a9ae64f36de9305b2d8727e696f1cdfe5b7d76e2f86"} Feb 02 13:20:18 crc kubenswrapper[4721]: E0202 13:20:18.587243 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-79zrv" podUID="499ca4ef-3867-407b-ab4a-64fff307e296" Feb 02 13:20:18 crc kubenswrapper[4721]: I0202 13:20:18.588111 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xwkhz" event={"ID":"56b67b2b-b9fd-4353-88e3-d4f1d44653e2","Type":"ContainerStarted","Data":"5b53dd51eed45826864bf9b471fc9ecb9f967352eb8091998745e303c7f858a8"} Feb 02 13:20:18 crc kubenswrapper[4721]: E0202 13:20:18.590454 
4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xwkhz" podUID="56b67b2b-b9fd-4353-88e3-d4f1d44653e2" Feb 02 13:20:18 crc kubenswrapper[4721]: I0202 13:20:18.591712 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-42qq8" event={"ID":"dc736681-960e-4f76-bc10-25f529da020a","Type":"ContainerStarted","Data":"24d34dd8e5d15be47a504e2775a2ba6965c6f08fdf2ca9c5884758b3a3d540c4"} Feb 02 13:20:18 crc kubenswrapper[4721]: I0202 13:20:18.593289 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-4pk6v" event={"ID":"058f996d-8009-4f83-864d-177f7b577cf0","Type":"ContainerStarted","Data":"0983ba074312bb2dd88f71b118cf42ed8067e78c7f205037c928c167dbb665aa"} Feb 02 13:20:18 crc kubenswrapper[4721]: I0202 13:20:18.597902 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-5vbh8" event={"ID":"b9eeabcd-14ef-4800-9f0c-1a3cd515d2aa","Type":"ContainerStarted","Data":"471947c38644f35ad2bc7655ff0343d21222d75dcd4f049db3926a1c3440b6fe"} Feb 02 13:20:18 crc kubenswrapper[4721]: I0202 13:20:18.600286 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-x6p4t" event={"ID":"8a86dacc-de73-4b52-994c-3b089ee427cc","Type":"ContainerStarted","Data":"e439600c106ad9bbc89a92a91898ba0e11c3de77adca54f6e00afd1e4adc73f8"} Feb 02 13:20:18 crc kubenswrapper[4721]: I0202 13:20:18.602760 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5b9ffd7d65-rgkhb" event={"ID":"79e5221b-04ee-496d-82b7-16af5b340595","Type":"ContainerStarted","Data":"7a1a23906b68b2275b2b93abc3243a639322b6f3cac84fea671ba99d73325348"} Feb 02 13:20:18 crc kubenswrapper[4721]: I0202 13:20:18.604463 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae636942-3520-410e-b70a-b4fc19a527ca-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw\" (UID: \"ae636942-3520-410e-b70a-b4fc19a527ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw" Feb 02 13:20:18 crc kubenswrapper[4721]: E0202 13:20:18.604645 4721 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 13:20:18 crc kubenswrapper[4721]: E0202 13:20:18.604708 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae636942-3520-410e-b70a-b4fc19a527ca-cert podName:ae636942-3520-410e-b70a-b4fc19a527ca nodeName:}" failed. No retries permitted until 2026-02-02 13:20:22.604688638 +0000 UTC m=+1162.907203027 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae636942-3520-410e-b70a-b4fc19a527ca-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw" (UID: "ae636942-3520-410e-b70a-b4fc19a527ca") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 13:20:18 crc kubenswrapper[4721]: I0202 13:20:18.609362 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kqdjm" event={"ID":"ed67384c-22d3-4466-8990-744b122efbf4","Type":"ContainerStarted","Data":"6ea4d412214ebd4cf5a5edc453c2a6f8e11bf53bf63bc2293ab350ba29e245ab"} Feb 02 13:20:18 crc kubenswrapper[4721]: E0202 13:20:18.609885 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kqdjm" podUID="ed67384c-22d3-4466-8990-744b122efbf4" Feb 02 13:20:18 crc kubenswrapper[4721]: I0202 13:20:18.611705 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ct6hc" event={"ID":"1c4864d3-2fdd-4b98-ac89-aefb49b56187","Type":"ContainerStarted","Data":"faa5a566441e6886f18327166fefb1254f417081b69cbf6d4b69c62f40c66281"} Feb 02 13:20:18 crc kubenswrapper[4721]: I0202 13:20:18.613616 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sjvjw" event={"ID":"2d60d537-ea47-42fa-94c3-61704aef0678","Type":"ContainerStarted","Data":"2b420fd5ec1c7a11931f50ccd0e0b5f7e9854fbffa2f1cd0f377125a53ff6b3d"} Feb 02 13:20:18 crc kubenswrapper[4721]: I0202 13:20:18.634721 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-6z258" event={"ID":"1f3087b4-acf0-4a27-9696-bdfb4728e96c","Type":"ContainerStarted","Data":"17f2e2e8004c70072aacb28682f3183d6d7f698c0b3aabebabe8de85b22a96f3"} Feb 02 13:20:19 crc kubenswrapper[4721]: I0202 13:20:19.219095 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-webhook-certs\") pod \"openstack-operator-controller-manager-5d6c59fb84-5s25p\" (UID: \"55bc1d80-1d29-4e15-baca-49eee6fd3aa5\") " pod="openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p" Feb 02 13:20:19 crc kubenswrapper[4721]: I0202 13:20:19.219214 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-metrics-certs\") pod \"openstack-operator-controller-manager-5d6c59fb84-5s25p\" (UID: \"55bc1d80-1d29-4e15-baca-49eee6fd3aa5\") " pod="openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p" Feb 02 13:20:19 crc kubenswrapper[4721]: E0202 13:20:19.219252 4721 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 13:20:19 crc kubenswrapper[4721]: E0202 13:20:19.219317 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-webhook-certs podName:55bc1d80-1d29-4e15-baca-49eee6fd3aa5 nodeName:}" failed. 
No retries permitted until 2026-02-02 13:20:23.219298809 +0000 UTC m=+1163.521813198 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-webhook-certs") pod "openstack-operator-controller-manager-5d6c59fb84-5s25p" (UID: "55bc1d80-1d29-4e15-baca-49eee6fd3aa5") : secret "webhook-server-cert" not found
Feb 02 13:20:19 crc kubenswrapper[4721]: E0202 13:20:19.219378 4721 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 02 13:20:19 crc kubenswrapper[4721]: E0202 13:20:19.219418 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-metrics-certs podName:55bc1d80-1d29-4e15-baca-49eee6fd3aa5 nodeName:}" failed. No retries permitted until 2026-02-02 13:20:23.219407172 +0000 UTC m=+1163.521921561 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-metrics-certs") pod "openstack-operator-controller-manager-5d6c59fb84-5s25p" (UID: "55bc1d80-1d29-4e15-baca-49eee6fd3aa5") : secret "metrics-server-cert" not found
Feb 02 13:20:19 crc kubenswrapper[4721]: E0202 13:20:19.657877 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-79zrv" podUID="499ca4ef-3867-407b-ab4a-64fff307e296"
Feb 02 13:20:19 crc kubenswrapper[4721]: E0202 13:20:19.657886 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xwkhz" podUID="56b67b2b-b9fd-4353-88e3-d4f1d44653e2"
Feb 02 13:20:19 crc kubenswrapper[4721]: E0202 13:20:19.677930 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kqdjm" podUID="ed67384c-22d3-4466-8990-744b122efbf4"
Feb 02 13:20:22 crc kubenswrapper[4721]: I0202 13:20:22.301254 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9d11c3e4-10b4-4ff4-aaa2-04e342d984b4-cert\") pod \"infra-operator-controller-manager-79955696d6-hktcl\" (UID: \"9d11c3e4-10b4-4ff4-aaa2-04e342d984b4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-hktcl"
Feb 02 13:20:22 crc kubenswrapper[4721]: E0202 13:20:22.301406 4721 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 02 13:20:22 crc kubenswrapper[4721]: E0202 13:20:22.301975 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d11c3e4-10b4-4ff4-aaa2-04e342d984b4-cert
podName:9d11c3e4-10b4-4ff4-aaa2-04e342d984b4 nodeName:}" failed. No retries permitted until 2026-02-02 13:20:30.301952625 +0000 UTC m=+1170.604467014 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9d11c3e4-10b4-4ff4-aaa2-04e342d984b4-cert") pod "infra-operator-controller-manager-79955696d6-hktcl" (UID: "9d11c3e4-10b4-4ff4-aaa2-04e342d984b4") : secret "infra-operator-webhook-server-cert" not found
Feb 02 13:20:22 crc kubenswrapper[4721]: I0202 13:20:22.605901 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae636942-3520-410e-b70a-b4fc19a527ca-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw\" (UID: \"ae636942-3520-410e-b70a-b4fc19a527ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw"
Feb 02 13:20:22 crc kubenswrapper[4721]: E0202 13:20:22.606338 4721 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 02 13:20:22 crc kubenswrapper[4721]: E0202 13:20:22.606646 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae636942-3520-410e-b70a-b4fc19a527ca-cert podName:ae636942-3520-410e-b70a-b4fc19a527ca nodeName:}" failed. No retries permitted until 2026-02-02 13:20:30.606623819 +0000 UTC m=+1170.909138208 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae636942-3520-410e-b70a-b4fc19a527ca-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw" (UID: "ae636942-3520-410e-b70a-b4fc19a527ca") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 02 13:20:23 crc kubenswrapper[4721]: I0202 13:20:23.317224 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-webhook-certs\") pod \"openstack-operator-controller-manager-5d6c59fb84-5s25p\" (UID: \"55bc1d80-1d29-4e15-baca-49eee6fd3aa5\") " pod="openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p"
Feb 02 13:20:23 crc kubenswrapper[4721]: I0202 13:20:23.317286 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-metrics-certs\") pod \"openstack-operator-controller-manager-5d6c59fb84-5s25p\" (UID: \"55bc1d80-1d29-4e15-baca-49eee6fd3aa5\") " pod="openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p"
Feb 02 13:20:23 crc kubenswrapper[4721]: E0202 13:20:23.317359 4721 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 02 13:20:23 crc kubenswrapper[4721]: E0202 13:20:23.317393 4721 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 02 13:20:23 crc kubenswrapper[4721]: E0202 13:20:23.317428 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-metrics-certs podName:55bc1d80-1d29-4e15-baca-49eee6fd3aa5 nodeName:}" failed. No retries permitted until 2026-02-02 13:20:31.317414466 +0000 UTC m=+1171.619928855 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-metrics-certs") pod "openstack-operator-controller-manager-5d6c59fb84-5s25p" (UID: "55bc1d80-1d29-4e15-baca-49eee6fd3aa5") : secret "metrics-server-cert" not found Feb 02 13:20:23 crc kubenswrapper[4721]: E0202 13:20:23.317440 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-webhook-certs podName:55bc1d80-1d29-4e15-baca-49eee6fd3aa5 nodeName:}" failed. No retries permitted until 2026-02-02 13:20:31.317435157 +0000 UTC m=+1171.619949546 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-webhook-certs") pod "openstack-operator-controller-manager-5d6c59fb84-5s25p" (UID: "55bc1d80-1d29-4e15-baca-49eee6fd3aa5") : secret "webhook-server-cert" not found Feb 02 13:20:30 crc kubenswrapper[4721]: I0202 13:20:30.376917 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9d11c3e4-10b4-4ff4-aaa2-04e342d984b4-cert\") pod \"infra-operator-controller-manager-79955696d6-hktcl\" (UID: \"9d11c3e4-10b4-4ff4-aaa2-04e342d984b4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-hktcl" Feb 02 13:20:30 crc kubenswrapper[4721]: E0202 13:20:30.377608 4721 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 13:20:30 crc kubenswrapper[4721]: E0202 13:20:30.377658 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d11c3e4-10b4-4ff4-aaa2-04e342d984b4-cert podName:9d11c3e4-10b4-4ff4-aaa2-04e342d984b4 nodeName:}" failed. No retries permitted until 2026-02-02 13:20:46.377643824 +0000 UTC m=+1186.680158213 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9d11c3e4-10b4-4ff4-aaa2-04e342d984b4-cert") pod "infra-operator-controller-manager-79955696d6-hktcl" (UID: "9d11c3e4-10b4-4ff4-aaa2-04e342d984b4") : secret "infra-operator-webhook-server-cert" not found Feb 02 13:20:30 crc kubenswrapper[4721]: I0202 13:20:30.682524 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae636942-3520-410e-b70a-b4fc19a527ca-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw\" (UID: \"ae636942-3520-410e-b70a-b4fc19a527ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw" Feb 02 13:20:30 crc kubenswrapper[4721]: E0202 13:20:30.682718 4721 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 13:20:30 crc kubenswrapper[4721]: E0202 13:20:30.683013 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae636942-3520-410e-b70a-b4fc19a527ca-cert podName:ae636942-3520-410e-b70a-b4fc19a527ca nodeName:}" failed. No retries permitted until 2026-02-02 13:20:46.682968697 +0000 UTC m=+1186.985483076 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae636942-3520-410e-b70a-b4fc19a527ca-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw" (UID: "ae636942-3520-410e-b70a-b4fc19a527ca") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 13:20:30 crc kubenswrapper[4721]: E0202 13:20:30.887419 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521" Feb 02 13:20:30 crc kubenswrapper[4721]: E0202 13:20:30.887677 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zsmbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5f4b8bd54d-qqpfm_openstack-operators(a13f2341-6b53-4a7b-b67a-4a1d1846805d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 13:20:30 crc kubenswrapper[4721]: E0202 13:20:30.888893 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-qqpfm" podUID="a13f2341-6b53-4a7b-b67a-4a1d1846805d" Feb 02 13:20:31 crc kubenswrapper[4721]: I0202 13:20:31.395596 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-webhook-certs\") pod \"openstack-operator-controller-manager-5d6c59fb84-5s25p\" (UID: \"55bc1d80-1d29-4e15-baca-49eee6fd3aa5\") " pod="openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p" Feb 02 13:20:31 crc kubenswrapper[4721]: I0202 13:20:31.395697 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-metrics-certs\") pod \"openstack-operator-controller-manager-5d6c59fb84-5s25p\" (UID: \"55bc1d80-1d29-4e15-baca-49eee6fd3aa5\") " pod="openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p" Feb 02 13:20:31 crc kubenswrapper[4721]: E0202 13:20:31.396881 4721 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 13:20:31 crc kubenswrapper[4721]: E0202 13:20:31.396958 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-webhook-certs podName:55bc1d80-1d29-4e15-baca-49eee6fd3aa5 nodeName:}" failed. No retries permitted until 2026-02-02 13:20:47.396936 +0000 UTC m=+1187.699450469 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-webhook-certs") pod "openstack-operator-controller-manager-5d6c59fb84-5s25p" (UID: "55bc1d80-1d29-4e15-baca-49eee6fd3aa5") : secret "webhook-server-cert" not found Feb 02 13:20:31 crc kubenswrapper[4721]: I0202 13:20:31.402498 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-metrics-certs\") pod \"openstack-operator-controller-manager-5d6c59fb84-5s25p\" (UID: \"55bc1d80-1d29-4e15-baca-49eee6fd3aa5\") " pod="openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p" Feb 02 13:20:31 crc kubenswrapper[4721]: E0202 13:20:31.762708 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-qqpfm" podUID="a13f2341-6b53-4a7b-b67a-4a1d1846805d" Feb 02 13:20:31 crc kubenswrapper[4721]: E0202 13:20:31.813172 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4" Feb 02 13:20:31 crc kubenswrapper[4721]: E0202 13:20:31.813377 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p959t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-8886f4c47-q5lbf_openstack-operators(20f771bf-d003-48b0-8e50-0d1217f24b45): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 02 13:20:31 crc kubenswrapper[4721]: E0202 13:20:31.814517 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-q5lbf" podUID="20f771bf-d003-48b0-8e50-0d1217f24b45"
Feb 02 13:20:32 crc kubenswrapper[4721]: E0202 13:20:32.769970 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4\\\"\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-q5lbf" podUID="20f771bf-d003-48b0-8e50-0d1217f24b45"
Feb 02 13:20:33 crc kubenswrapper[4721]: E0202 13:20:33.742040 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6"
Feb 02 13:20:33 crc kubenswrapper[4721]: E0202 13:20:33.742319 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-662gz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-sjvjw_openstack-operators(2d60d537-ea47-42fa-94c3-61704aef0678): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 13:20:33 crc kubenswrapper[4721]: E0202 13:20:33.743698 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sjvjw" podUID="2d60d537-ea47-42fa-94c3-61704aef0678" Feb 02 13:20:33 crc kubenswrapper[4721]: E0202 13:20:33.776205 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sjvjw" podUID="2d60d537-ea47-42fa-94c3-61704aef0678" Feb 02 13:20:34 crc kubenswrapper[4721]: E0202 13:20:34.439298 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/barbican-operator@sha256:379470e2752f286e73908e94233e884922b231169a5521a59f53843a2dc3184c" Feb 02 13:20:34 crc kubenswrapper[4721]: E0202 13:20:34.439771 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:379470e2752f286e73908e94233e884922b231169a5521a59f53843a2dc3184c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jnfsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7b6c4d8c5f-8zlv5_openstack-operators(0c1486a5-ee95-4cde-9631-3c7c7aa31ae7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 13:20:34 crc kubenswrapper[4721]: E0202 13:20:34.440982 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8zlv5" podUID="0c1486a5-ee95-4cde-9631-3c7c7aa31ae7" Feb 02 13:20:34 crc kubenswrapper[4721]: E0202 13:20:34.784863 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:379470e2752f286e73908e94233e884922b231169a5521a59f53843a2dc3184c\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8zlv5" 
podUID="0c1486a5-ee95-4cde-9631-3c7c7aa31ae7" Feb 02 13:20:36 crc kubenswrapper[4721]: E0202 13:20:36.597413 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b" Feb 02 13:20:36 crc kubenswrapper[4721]: E0202 13:20:36.597852 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hrbbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-4pk6v_openstack-operators(058f996d-8009-4f83-864d-177f7b577cf0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 13:20:36 crc kubenswrapper[4721]: E0202 13:20:36.599042 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-4pk6v" podUID="058f996d-8009-4f83-864d-177f7b577cf0" Feb 02 13:20:36 crc kubenswrapper[4721]: E0202 13:20:36.801961 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-4pk6v" podUID="058f996d-8009-4f83-864d-177f7b577cf0"
Feb 02 13:20:37 crc kubenswrapper[4721]: E0202 13:20:37.306728 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566"
Feb 02 13:20:37 crc kubenswrapper[4721]: E0202 13:20:37.306915 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rqstl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7dd968899f-5x28t_openstack-operators(39686eda-a258-408b-bf9c-7ff7d515ed9d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 02 13:20:37 crc kubenswrapper[4721]: E0202 13:20:37.309531 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5x28t"
podUID="39686eda-a258-408b-bf9c-7ff7d515ed9d" Feb 02 13:20:37 crc kubenswrapper[4721]: E0202 13:20:37.810057 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5x28t" podUID="39686eda-a258-408b-bf9c-7ff7d515ed9d" Feb 02 13:20:38 crc kubenswrapper[4721]: E0202 13:20:38.653114 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf" Feb 02 13:20:38 crc kubenswrapper[4721]: E0202 13:20:38.653601 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-snq4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-ct6hc_openstack-operators(1c4864d3-2fdd-4b98-ac89-aefb49b56187): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 13:20:38 crc kubenswrapper[4721]: E0202 13:20:38.655303 4721 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ct6hc" podUID="1c4864d3-2fdd-4b98-ac89-aefb49b56187" Feb 02 13:20:38 crc kubenswrapper[4721]: E0202 13:20:38.817167 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ct6hc" podUID="1c4864d3-2fdd-4b98-ac89-aefb49b56187" Feb 02 13:20:39 crc kubenswrapper[4721]: E0202 13:20:39.264890 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241" Feb 02 13:20:39 crc kubenswrapper[4721]: E0202 13:20:39.265054 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w294l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-2828d_openstack-operators(60ff9309-fd37-4618-b4f0-38704a558ec0): ErrImagePull: 
rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 02 13:20:39 crc kubenswrapper[4721]: E0202 13:20:39.266469 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2828d" podUID="60ff9309-fd37-4618-b4f0-38704a558ec0"
Feb 02 13:20:39 crc kubenswrapper[4721]: E0202 13:20:39.825542 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2828d" podUID="60ff9309-fd37-4618-b4f0-38704a558ec0"
Feb 02 13:20:46 crc kubenswrapper[4721]: E0202 13:20:46.137906 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17"
Feb 02 13:20:46 crc kubenswrapper[4721]: E0202 13:20:46.138916 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d9qz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-5vbh8_openstack-operators(b9eeabcd-14ef-4800-9f0c-1a3cd515d2aa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 02 13:20:46 crc kubenswrapper[4721]: E0202 13:20:46.140368 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-5vbh8" podUID="b9eeabcd-14ef-4800-9f0c-1a3cd515d2aa"
Feb 02 13:20:46 crc kubenswrapper[4721]: I0202 13:20:46.457635 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9d11c3e4-10b4-4ff4-aaa2-04e342d984b4-cert\") pod \"infra-operator-controller-manager-79955696d6-hktcl\" (UID: \"9d11c3e4-10b4-4ff4-aaa2-04e342d984b4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-hktcl"
Feb 02 13:20:46 crc kubenswrapper[4721]: I0202 13:20:46.465462 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9d11c3e4-10b4-4ff4-aaa2-04e342d984b4-cert\") pod \"infra-operator-controller-manager-79955696d6-hktcl\" (UID: \"9d11c3e4-10b4-4ff4-aaa2-04e342d984b4\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-hktcl"
Feb 02 13:20:46 crc kubenswrapper[4721]: I0202 13:20:46.706931 4721 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-hktcl" Feb 02 13:20:46 crc kubenswrapper[4721]: E0202 13:20:46.745214 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e" Feb 02 13:20:46 crc kubenswrapper[4721]: E0202 13:20:46.745385 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8fnk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-6z258_openstack-operators(1f3087b4-acf0-4a27-9696-bdfb4728e96c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 13:20:46 crc kubenswrapper[4721]: E0202 13:20:46.746677 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-6z258" podUID="1f3087b4-acf0-4a27-9696-bdfb4728e96c" Feb 02 13:20:46 crc kubenswrapper[4721]: I0202 13:20:46.764204 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/ae636942-3520-410e-b70a-b4fc19a527ca-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw\" (UID: \"ae636942-3520-410e-b70a-b4fc19a527ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw" Feb 02 13:20:46 crc kubenswrapper[4721]: I0202 13:20:46.769772 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae636942-3520-410e-b70a-b4fc19a527ca-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw\" (UID: \"ae636942-3520-410e-b70a-b4fc19a527ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw" Feb 02 13:20:46 crc kubenswrapper[4721]: I0202 13:20:46.875653 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw" Feb 02 13:20:46 crc kubenswrapper[4721]: E0202 13:20:46.901513 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-6z258" podUID="1f3087b4-acf0-4a27-9696-bdfb4728e96c" Feb 02 13:20:46 crc kubenswrapper[4721]: E0202 13:20:46.901796 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-5vbh8" podUID="b9eeabcd-14ef-4800-9f0c-1a3cd515d2aa" Feb 02 13:20:47 crc kubenswrapper[4721]: I0202 13:20:47.476888 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-webhook-certs\") pod \"openstack-operator-controller-manager-5d6c59fb84-5s25p\" (UID: \"55bc1d80-1d29-4e15-baca-49eee6fd3aa5\") " pod="openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p" Feb 02 13:20:47 crc kubenswrapper[4721]: I0202 13:20:47.484505 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/55bc1d80-1d29-4e15-baca-49eee6fd3aa5-webhook-certs\") pod \"openstack-operator-controller-manager-5d6c59fb84-5s25p\" (UID: \"55bc1d80-1d29-4e15-baca-49eee6fd3aa5\") " pod="openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p" Feb 02 13:20:47 crc kubenswrapper[4721]: I0202 13:20:47.557769 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p" Feb 02 13:20:47 crc kubenswrapper[4721]: I0202 13:20:47.865889 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-hktcl"] Feb 02 13:20:47 crc kubenswrapper[4721]: I0202 13:20:47.920055 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-sq5w5" event={"ID":"e5a04e0d-8a73-4f21-a61d-374d7a5784fb","Type":"ContainerStarted","Data":"c0d07b97a688293d76e08988273b5741d49efe11e45772f740883091f139ec0d"} Feb 02 13:20:47 crc kubenswrapper[4721]: I0202 13:20:47.920203 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-sq5w5" Feb 02 13:20:47 crc kubenswrapper[4721]: W0202 13:20:47.933413 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d11c3e4_10b4_4ff4_aaa2_04e342d984b4.slice/crio-cf534ec8ff2c801fb79015aaf251334b66d35d2f6b81f8e3be6a7a9dcf1e5e1f WatchSource:0}: Error finding container cf534ec8ff2c801fb79015aaf251334b66d35d2f6b81f8e3be6a7a9dcf1e5e1f: Status 404 returned error can't find the container with id cf534ec8ff2c801fb79015aaf251334b66d35d2f6b81f8e3be6a7a9dcf1e5e1f Feb 02 13:20:47 crc kubenswrapper[4721]: I0202 13:20:47.933808 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p"] Feb 02 13:20:47 crc kubenswrapper[4721]: I0202 13:20:47.934294 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-s75st" event={"ID":"23be57b1-6b3e-4346-93f9-2c45b0562d2b","Type":"ContainerStarted","Data":"4fba65b61bf8ae32cd08011529ace66fa359e9d02d20f9024a34708b8a36fba7"} Feb 02 13:20:47 crc kubenswrapper[4721]: I0202 13:20:47.934442 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-s75st" Feb 02 13:20:47 crc kubenswrapper[4721]: I0202 13:20:47.950173 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-sq5w5" podStartSLOduration=5.776487449 podStartE2EDuration="33.950155125s" podCreationTimestamp="2026-02-02 13:20:14 +0000 UTC" firstStartedPulling="2026-02-02 13:20:17.946291525 +0000 UTC m=+1158.248805914" lastFinishedPulling="2026-02-02 13:20:46.119959201 +0000 UTC m=+1186.422473590" observedRunningTime="2026-02-02 13:20:47.940962748 +0000 UTC m=+1188.243477197" watchObservedRunningTime="2026-02-02 13:20:47.950155125 +0000 UTC m=+1188.252669514" Feb 02 13:20:47 crc kubenswrapper[4721]: I0202 13:20:47.972553 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-s75st" podStartSLOduration=4.28802897 podStartE2EDuration="33.97253573s" podCreationTimestamp="2026-02-02 13:20:14 +0000 UTC" firstStartedPulling="2026-02-02 13:20:16.433014215 +0000 UTC m=+1156.735528604" lastFinishedPulling="2026-02-02 13:20:46.117520975 +0000 UTC m=+1186.420035364" observedRunningTime="2026-02-02 13:20:47.96922031 +0000 UTC m=+1188.271734699" watchObservedRunningTime="2026-02-02 13:20:47.97253573 +0000 UTC m=+1188.275050129" Feb 02 13:20:48 crc kubenswrapper[4721]: I0202 13:20:48.003023 4721 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rzjts" event={"ID":"6b33adce-a49a-4ce2-af29-412661aaf062","Type":"ContainerStarted","Data":"cb526328f69003fa1613ab54133e2b47c03d6a116bd0f2d0d9f76c110b41e91d"} Feb 02 13:20:48 crc kubenswrapper[4721]: I0202 13:20:48.003110 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rzjts" Feb 02 13:20:48 crc kubenswrapper[4721]: I0202 13:20:48.007534 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw"] Feb 02 13:20:48 crc kubenswrapper[4721]: I0202 13:20:48.017749 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5b9ffd7d65-rgkhb" event={"ID":"79e5221b-04ee-496d-82b7-16af5b340595","Type":"ContainerStarted","Data":"4eb53c7c6627b8e1c53a26b3100872c36008080df6a06829dd6bae642daa0b7c"} Feb 02 13:20:48 crc kubenswrapper[4721]: I0202 13:20:48.018675 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5b9ffd7d65-rgkhb" Feb 02 13:20:48 crc kubenswrapper[4721]: I0202 13:20:48.023341 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rzjts" podStartSLOduration=5.497778497 podStartE2EDuration="34.023325091s" podCreationTimestamp="2026-02-02 13:20:14 +0000 UTC" firstStartedPulling="2026-02-02 13:20:17.592983658 +0000 UTC m=+1157.895498047" lastFinishedPulling="2026-02-02 13:20:46.118530252 +0000 UTC m=+1186.421044641" observedRunningTime="2026-02-02 13:20:48.02179544 +0000 UTC m=+1188.324309829" watchObservedRunningTime="2026-02-02 13:20:48.023325091 +0000 UTC m=+1188.325839470" Feb 02 13:20:48 crc kubenswrapper[4721]: I0202 13:20:48.070882 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5b9ffd7d65-rgkhb" podStartSLOduration=4.903179337 podStartE2EDuration="34.070863715s" podCreationTimestamp="2026-02-02 13:20:14 +0000 UTC" firstStartedPulling="2026-02-02 13:20:17.557198552 +0000 UTC m=+1157.859712941" lastFinishedPulling="2026-02-02 13:20:46.72488293 +0000 UTC m=+1187.027397319" observedRunningTime="2026-02-02 13:20:48.046001383 +0000 UTC m=+1188.348515782" watchObservedRunningTime="2026-02-02 13:20:48.070863715 +0000 UTC m=+1188.373378104" Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.026534 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-79zrv" event={"ID":"499ca4ef-3867-407b-ab4a-64fff307e296","Type":"ContainerStarted","Data":"f49643613b63bc62f45dc4e728757d15c987d21c43c18bfa6758c390fe1f39ce"} Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.027357 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-79zrv" Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.028123 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p" event={"ID":"55bc1d80-1d29-4e15-baca-49eee6fd3aa5","Type":"ContainerStarted","Data":"c09e042f26a6c7cec14d57fb8281c8288c7994c6467860aedc668c96157eb6ff"} Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.028166 4721 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p" event={"ID":"55bc1d80-1d29-4e15-baca-49eee6fd3aa5","Type":"ContainerStarted","Data":"0be584c4dfc86e974e8c29adc0d7b52a3cb7525cb15ff3f593b07acafee1f276"} Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.028499 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p" Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.029231 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-hktcl" event={"ID":"9d11c3e4-10b4-4ff4-aaa2-04e342d984b4","Type":"ContainerStarted","Data":"cf534ec8ff2c801fb79015aaf251334b66d35d2f6b81f8e3be6a7a9dcf1e5e1f"} Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.030696 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sjvjw" event={"ID":"2d60d537-ea47-42fa-94c3-61704aef0678","Type":"ContainerStarted","Data":"ea603c5fa5301bac67a200d83e1ea6c4bc07678d5b2cb44a5830807116b184e6"} Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.030921 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sjvjw" Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.032852 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-x6p4t" event={"ID":"8a86dacc-de73-4b52-994c-3b089ee427cc","Type":"ContainerStarted","Data":"647c2bb6d09289bca08a4175e39e81c1aee864e548128f2e678bd28e7d8c85b5"} Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.032878 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-x6p4t" Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.034708 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-729mv" event={"ID":"0562e590-1a66-4fbc-862d-833bc1600eac","Type":"ContainerStarted","Data":"387137b53d96a73fa558ca940bfa48d59d3708420598717f17167e4a673e7763"} Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.034841 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-729mv" Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.036604 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xwkhz" event={"ID":"56b67b2b-b9fd-4353-88e3-d4f1d44653e2","Type":"ContainerStarted","Data":"f9d62505f47aca83675b83ff091313c0aa58d9af4c14cc8c0f02b74cbf4f7e25"} Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.038265 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw" event={"ID":"ae636942-3520-410e-b70a-b4fc19a527ca","Type":"ContainerStarted","Data":"e16577c4382a9568fff440f6e1920728aa5eece0832bdecd4ce93e31660a2801"} Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.039875 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-q5lbf" 
event={"ID":"20f771bf-d003-48b0-8e50-0d1217f24b45","Type":"ContainerStarted","Data":"2173a92b978455abce7aba1b85d479f3912404197a459a11a84a16ae19eb40ba"} Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.040102 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-q5lbf" Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.041326 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-qqpfm" event={"ID":"a13f2341-6b53-4a7b-b67a-4a1d1846805d","Type":"ContainerStarted","Data":"873cb0f86ebab76c9c5fba2b5b0d2c4ba945e1bd27fca2383c85d94aa4f60695"} Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.041477 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-qqpfm" Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.043549 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-42qq8" event={"ID":"dc736681-960e-4f76-bc10-25f529da020a","Type":"ContainerStarted","Data":"5763d53b309ea1dd4d4e42205c5751ae55546b72a7bdfd71d412ae9c719a3eda"} Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.043732 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-42qq8" Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.044864 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-4pk6v" event={"ID":"058f996d-8009-4f83-864d-177f7b577cf0","Type":"ContainerStarted","Data":"937b1fdbc6508ad95d249a0e96910617ee8f21c89375457abc058c69d9b5af32"} Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.045025 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-4pk6v" Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.047657 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kqdjm" event={"ID":"ed67384c-22d3-4466-8990-744b122efbf4","Type":"ContainerStarted","Data":"f5a0aab20ab7f56fabfe6dd5a21a586053f50f3b8c5ed42c3fbe3bf38a7917b5"} Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.091209 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-79zrv" podStartSLOduration=5.8596063560000005 podStartE2EDuration="35.091193298s" podCreationTimestamp="2026-02-02 13:20:14 +0000 UTC" firstStartedPulling="2026-02-02 13:20:18.133191841 +0000 UTC m=+1158.435706230" lastFinishedPulling="2026-02-02 13:20:47.364778763 +0000 UTC m=+1187.667293172" observedRunningTime="2026-02-02 13:20:49.087384976 +0000 UTC m=+1189.389899365" watchObservedRunningTime="2026-02-02 13:20:49.091193298 +0000 UTC m=+1189.393707687" Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.134803 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-x6p4t" podStartSLOduration=6.794040959 podStartE2EDuration="35.134781254s" podCreationTimestamp="2026-02-02 13:20:14 +0000 UTC" firstStartedPulling="2026-02-02 13:20:17.778118586 +0000 UTC m=+1158.080632975" lastFinishedPulling="2026-02-02 13:20:46.118858881 +0000 UTC m=+1186.421373270" 
observedRunningTime="2026-02-02 13:20:49.117489918 +0000 UTC m=+1189.420004317" watchObservedRunningTime="2026-02-02 13:20:49.134781254 +0000 UTC m=+1189.437295643" Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.194852 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sjvjw" podStartSLOduration=4.815111438 podStartE2EDuration="35.194830115s" podCreationTimestamp="2026-02-02 13:20:14 +0000 UTC" firstStartedPulling="2026-02-02 13:20:17.562463084 +0000 UTC m=+1157.864977473" lastFinishedPulling="2026-02-02 13:20:47.942181771 +0000 UTC m=+1188.244696150" observedRunningTime="2026-02-02 13:20:49.161484545 +0000 UTC m=+1189.463998964" watchObservedRunningTime="2026-02-02 13:20:49.194830115 +0000 UTC m=+1189.497344514" Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.195136 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-qqpfm" podStartSLOduration=3.808361114 podStartE2EDuration="35.195130544s" podCreationTimestamp="2026-02-02 13:20:14 +0000 UTC" firstStartedPulling="2026-02-02 13:20:16.02864277 +0000 UTC m=+1156.331157159" lastFinishedPulling="2026-02-02 13:20:47.41541218 +0000 UTC m=+1187.717926589" observedRunningTime="2026-02-02 13:20:49.194419435 +0000 UTC m=+1189.496933844" watchObservedRunningTime="2026-02-02 13:20:49.195130544 +0000 UTC m=+1189.497644933" Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.269154 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kqdjm" podStartSLOduration=6.038181175 podStartE2EDuration="35.269133511s" podCreationTimestamp="2026-02-02 13:20:14 +0000 UTC" firstStartedPulling="2026-02-02 13:20:18.13316392 +0000 UTC m=+1158.435678309" lastFinishedPulling="2026-02-02 13:20:47.364116256 +0000 UTC m=+1187.666630645" observedRunningTime="2026-02-02 13:20:49.23944458 +0000 UTC m=+1189.541958969" watchObservedRunningTime="2026-02-02 13:20:49.269133511 +0000 UTC m=+1189.571647920" Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.279681 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-42qq8" podStartSLOduration=6.733942886 podStartE2EDuration="35.279664986s" podCreationTimestamp="2026-02-02 13:20:14 +0000 UTC" firstStartedPulling="2026-02-02 13:20:17.572181156 +0000 UTC m=+1157.874695545" lastFinishedPulling="2026-02-02 13:20:46.117903256 +0000 UTC m=+1186.420417645" observedRunningTime="2026-02-02 13:20:49.270396135 +0000 UTC m=+1189.572910544" watchObservedRunningTime="2026-02-02 13:20:49.279664986 +0000 UTC m=+1189.582179395" Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.346269 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-4pk6v" podStartSLOduration=4.976911317 podStartE2EDuration="35.346248323s" podCreationTimestamp="2026-02-02 13:20:14 +0000 UTC" firstStartedPulling="2026-02-02 13:20:17.571797266 +0000 UTC m=+1157.874311645" lastFinishedPulling="2026-02-02 13:20:47.941134262 +0000 UTC m=+1188.243648651" observedRunningTime="2026-02-02 13:20:49.315858813 +0000 UTC m=+1189.618373212" watchObservedRunningTime="2026-02-02 13:20:49.346248323 +0000 UTC m=+1189.648762732" Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.351632 4721 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xwkhz" podStartSLOduration=6.106222612 podStartE2EDuration="35.351617048s" podCreationTimestamp="2026-02-02 13:20:14 +0000 UTC" firstStartedPulling="2026-02-02 13:20:18.156184851 +0000 UTC m=+1158.458699240" lastFinishedPulling="2026-02-02 13:20:47.401579287 +0000 UTC m=+1187.704093676" observedRunningTime="2026-02-02 13:20:49.347438345 +0000 UTC m=+1189.649952744" watchObservedRunningTime="2026-02-02 13:20:49.351617048 +0000 UTC m=+1189.654131427" Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.395003 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-729mv" podStartSLOduration=4.539814106 podStartE2EDuration="35.394978468s" podCreationTimestamp="2026-02-02 13:20:14 +0000 UTC" firstStartedPulling="2026-02-02 13:20:15.262454646 +0000 UTC m=+1155.564969035" lastFinishedPulling="2026-02-02 13:20:46.117619008 +0000 UTC m=+1186.420133397" observedRunningTime="2026-02-02 13:20:49.376598102 +0000 UTC m=+1189.679112491" watchObservedRunningTime="2026-02-02 13:20:49.394978468 +0000 UTC m=+1189.697492867" Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.482774 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p" podStartSLOduration=35.482748548 podStartE2EDuration="35.482748548s" podCreationTimestamp="2026-02-02 13:20:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:20:49.482570843 +0000 UTC m=+1189.785085232" watchObservedRunningTime="2026-02-02 13:20:49.482748548 +0000 UTC m=+1189.785262937" Feb 02 13:20:49 crc kubenswrapper[4721]: I0202 13:20:49.483016 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-q5lbf" podStartSLOduration=3.654937091 podStartE2EDuration="35.483007285s" podCreationTimestamp="2026-02-02 13:20:14 +0000 UTC" firstStartedPulling="2026-02-02 13:20:15.573519784 +0000 UTC m=+1155.876034183" lastFinishedPulling="2026-02-02 13:20:47.401589968 +0000 UTC m=+1187.704104377" observedRunningTime="2026-02-02 13:20:49.427521027 +0000 UTC m=+1189.730035416" watchObservedRunningTime="2026-02-02 13:20:49.483007285 +0000 UTC m=+1189.785521674" Feb 02 13:20:50 crc kubenswrapper[4721]: I0202 13:20:50.060069 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8zlv5" event={"ID":"0c1486a5-ee95-4cde-9631-3c7c7aa31ae7","Type":"ContainerStarted","Data":"e91d52bb1f7e6c61fa343414482c30f0348d9fe59bf0f15b9460d046775f0008"} Feb 02 13:20:50 crc kubenswrapper[4721]: I0202 13:20:50.113179 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8zlv5" podStartSLOduration=2.689799218 podStartE2EDuration="36.113162906s" podCreationTimestamp="2026-02-02 13:20:14 +0000 UTC" firstStartedPulling="2026-02-02 13:20:16.043905922 +0000 UTC m=+1156.346420311" lastFinishedPulling="2026-02-02 13:20:49.46726962 +0000 UTC m=+1189.769783999" observedRunningTime="2026-02-02 13:20:50.107321148 +0000 UTC m=+1190.409835537" watchObservedRunningTime="2026-02-02 13:20:50.113162906 +0000 UTC m=+1190.415677295" Feb 02 13:20:52 crc 
kubenswrapper[4721]: I0202 13:20:52.089856 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ct6hc" event={"ID":"1c4864d3-2fdd-4b98-ac89-aefb49b56187","Type":"ContainerStarted","Data":"f367019a0258f466a9f8b33ecbf64c6e7081cb25195b2a05c4d0f997ea19028a"} Feb 02 13:20:52 crc kubenswrapper[4721]: I0202 13:20:52.090422 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ct6hc" Feb 02 13:20:52 crc kubenswrapper[4721]: I0202 13:20:52.093170 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5x28t" event={"ID":"39686eda-a258-408b-bf9c-7ff7d515ed9d","Type":"ContainerStarted","Data":"860752c75bf200f066ae6a9f1173c8d0bcb829e80158db2a8541dec6fec31650"} Feb 02 13:20:52 crc kubenswrapper[4721]: I0202 13:20:52.093377 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5x28t" Feb 02 13:20:52 crc kubenswrapper[4721]: I0202 13:20:52.111378 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ct6hc" podStartSLOduration=5.217866862 podStartE2EDuration="38.111361366s" podCreationTimestamp="2026-02-02 13:20:14 +0000 UTC" firstStartedPulling="2026-02-02 13:20:17.987341544 +0000 UTC m=+1158.289855933" lastFinishedPulling="2026-02-02 13:20:50.880836048 +0000 UTC m=+1191.183350437" observedRunningTime="2026-02-02 13:20:52.105230651 +0000 UTC m=+1192.407745050" watchObservedRunningTime="2026-02-02 13:20:52.111361366 +0000 UTC m=+1192.413875755" Feb 02 13:20:52 crc kubenswrapper[4721]: I0202 13:20:52.121309 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5x28t" podStartSLOduration=4.743463995 podStartE2EDuration="38.121289834s" podCreationTimestamp="2026-02-02 13:20:14 +0000 UTC" firstStartedPulling="2026-02-02 13:20:17.571487508 +0000 UTC m=+1157.874001897" lastFinishedPulling="2026-02-02 13:20:50.949313347 +0000 UTC m=+1191.251827736" observedRunningTime="2026-02-02 13:20:52.121059238 +0000 UTC m=+1192.423573627" watchObservedRunningTime="2026-02-02 13:20:52.121289834 +0000 UTC m=+1192.423804223" Feb 02 13:20:54 crc kubenswrapper[4721]: I0202 13:20:54.111435 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2828d" event={"ID":"60ff9309-fd37-4618-b4f0-38704a558ec0","Type":"ContainerStarted","Data":"2607181fbcc276cd8a9313ceac32ed74bac35ecf96c2f108e0d4a96ab4d6cd3f"} Feb 02 13:20:54 crc kubenswrapper[4721]: I0202 13:20:54.112002 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2828d" Feb 02 13:20:54 crc kubenswrapper[4721]: I0202 13:20:54.114193 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw" event={"ID":"ae636942-3520-410e-b70a-b4fc19a527ca","Type":"ContainerStarted","Data":"0661209edaa19772a1b3da5c382b5f3c6ba963856a2c619fc467140d78dad240"} Feb 02 13:20:54 crc kubenswrapper[4721]: I0202 13:20:54.114269 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw" 
Feb 02 13:20:54 crc kubenswrapper[4721]: I0202 13:20:54.115863 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-hktcl" event={"ID":"9d11c3e4-10b4-4ff4-aaa2-04e342d984b4","Type":"ContainerStarted","Data":"952925c167a862b67f4f41cbb9552a39c4b27ef647d400a67ea1ed96f7d94a9f"} Feb 02 13:20:54 crc kubenswrapper[4721]: I0202 13:20:54.116016 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-hktcl" Feb 02 13:20:54 crc kubenswrapper[4721]: I0202 13:20:54.128380 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2828d" podStartSLOduration=5.125579219 podStartE2EDuration="40.128359164s" podCreationTimestamp="2026-02-02 13:20:14 +0000 UTC" firstStartedPulling="2026-02-02 13:20:17.877979761 +0000 UTC m=+1158.180494150" lastFinishedPulling="2026-02-02 13:20:52.880759706 +0000 UTC m=+1193.183274095" observedRunningTime="2026-02-02 13:20:54.126560316 +0000 UTC m=+1194.429074705" watchObservedRunningTime="2026-02-02 13:20:54.128359164 +0000 UTC m=+1194.430873563" Feb 02 13:20:54 crc kubenswrapper[4721]: I0202 13:20:54.152752 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-hktcl" podStartSLOduration=35.220100887 podStartE2EDuration="40.152738322s" podCreationTimestamp="2026-02-02 13:20:14 +0000 UTC" firstStartedPulling="2026-02-02 13:20:47.947095183 +0000 UTC m=+1188.249609572" lastFinishedPulling="2026-02-02 13:20:52.879732618 +0000 UTC m=+1193.182247007" observedRunningTime="2026-02-02 13:20:54.152129656 +0000 UTC m=+1194.454644045" watchObservedRunningTime="2026-02-02 13:20:54.152738322 +0000 UTC m=+1194.455252711" Feb 02 13:20:54 crc kubenswrapper[4721]: I0202 13:20:54.198298 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw" podStartSLOduration=35.382840762 podStartE2EDuration="40.198273742s" podCreationTimestamp="2026-02-02 13:20:14 +0000 UTC" firstStartedPulling="2026-02-02 13:20:48.064128253 +0000 UTC m=+1188.366642642" lastFinishedPulling="2026-02-02 13:20:52.879561233 +0000 UTC m=+1193.182075622" observedRunningTime="2026-02-02 13:20:54.180641345 +0000 UTC m=+1194.483155724" watchObservedRunningTime="2026-02-02 13:20:54.198273742 +0000 UTC m=+1194.500788131" Feb 02 13:20:54 crc kubenswrapper[4721]: I0202 13:20:54.388546 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-729mv" Feb 02 13:20:54 crc kubenswrapper[4721]: I0202 13:20:54.421009 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-q5lbf" Feb 02 13:20:54 crc kubenswrapper[4721]: I0202 13:20:54.640753 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-qqpfm" Feb 02 13:20:54 crc kubenswrapper[4721]: I0202 13:20:54.679513 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8zlv5" Feb 02 13:20:54 crc kubenswrapper[4721]: I0202 13:20:54.681672 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-8zlv5" Feb 02 13:20:54 crc kubenswrapper[4721]: I0202 13:20:54.791713 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-s75st" Feb 02 13:20:54 crc kubenswrapper[4721]: I0202 13:20:54.823378 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-sq5w5" Feb 02 13:20:54 crc kubenswrapper[4721]: I0202 13:20:54.880498 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-x6p4t" Feb 02 13:20:55 crc kubenswrapper[4721]: I0202 13:20:55.004556 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sjvjw" Feb 02 13:20:55 crc kubenswrapper[4721]: I0202 13:20:55.042751 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-42qq8" Feb 02 13:20:55 crc kubenswrapper[4721]: I0202 13:20:55.119629 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rzjts" Feb 02 13:20:55 crc kubenswrapper[4721]: I0202 13:20:55.134606 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kqdjm" Feb 02 13:20:55 crc kubenswrapper[4721]: I0202 13:20:55.137712 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kqdjm" Feb 02 13:20:55 crc kubenswrapper[4721]: I0202 13:20:55.358151 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5b9ffd7d65-rgkhb" Feb 02 13:20:55 crc kubenswrapper[4721]: I0202 13:20:55.469283 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-4pk6v" Feb 02 13:20:55 crc kubenswrapper[4721]: I0202 13:20:55.638461 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-79zrv" Feb 02 13:20:57 crc kubenswrapper[4721]: I0202 13:20:57.565955 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5d6c59fb84-5s25p" Feb 02 13:20:59 crc kubenswrapper[4721]: I0202 13:20:59.162893 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-6z258" event={"ID":"1f3087b4-acf0-4a27-9696-bdfb4728e96c","Type":"ContainerStarted","Data":"5f2666561bbb7e0fa40c86683a4609b67a11327cccd9e6a4404430aeda5d605a"} Feb 02 13:20:59 crc kubenswrapper[4721]: I0202 13:20:59.163417 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-6z258" Feb 02 13:20:59 crc kubenswrapper[4721]: I0202 13:20:59.190094 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-6z258" podStartSLOduration=4.273265062 podStartE2EDuration="45.190050492s" podCreationTimestamp="2026-02-02 13:20:14 +0000 UTC" 
firstStartedPulling="2026-02-02 13:20:17.920491169 +0000 UTC m=+1158.223005558" lastFinishedPulling="2026-02-02 13:20:58.837276599 +0000 UTC m=+1199.139790988" observedRunningTime="2026-02-02 13:20:59.180008881 +0000 UTC m=+1199.482523290" watchObservedRunningTime="2026-02-02 13:20:59.190050492 +0000 UTC m=+1199.492564891" Feb 02 13:21:01 crc kubenswrapper[4721]: I0202 13:21:01.182612 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-5vbh8" event={"ID":"b9eeabcd-14ef-4800-9f0c-1a3cd515d2aa","Type":"ContainerStarted","Data":"2f9ba09e62dbc068ec3f14143c3c6fb998bfa7833571f655916f1e818ff9ebc6"} Feb 02 13:21:01 crc kubenswrapper[4721]: I0202 13:21:01.183414 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-5vbh8" Feb 02 13:21:01 crc kubenswrapper[4721]: I0202 13:21:01.199646 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-5vbh8" podStartSLOduration=4.048929307 podStartE2EDuration="47.199611979s" podCreationTimestamp="2026-02-02 13:20:14 +0000 UTC" firstStartedPulling="2026-02-02 13:20:17.805553877 +0000 UTC m=+1158.108068266" lastFinishedPulling="2026-02-02 13:21:00.956236549 +0000 UTC m=+1201.258750938" observedRunningTime="2026-02-02 13:21:01.196864325 +0000 UTC m=+1201.499378724" watchObservedRunningTime="2026-02-02 13:21:01.199611979 +0000 UTC m=+1201.502126368" Feb 02 13:21:04 crc kubenswrapper[4721]: I0202 13:21:04.930278 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5x28t" Feb 02 13:21:04 crc kubenswrapper[4721]: I0202 13:21:04.987538 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ct6hc" Feb 02 13:21:05 crc kubenswrapper[4721]: I0202 13:21:05.060890 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-6z258" Feb 02 13:21:05 crc kubenswrapper[4721]: I0202 13:21:05.696434 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2828d" Feb 02 13:21:06 crc kubenswrapper[4721]: I0202 13:21:06.714565 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-hktcl" Feb 02 13:21:06 crc kubenswrapper[4721]: I0202 13:21:06.881196 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw" Feb 02 13:21:14 crc kubenswrapper[4721]: I0202 13:21:14.764714 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:21:14 crc kubenswrapper[4721]: I0202 13:21:14.765593 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 02 13:21:14 crc kubenswrapper[4721]: I0202 13:21:14.969458 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-5vbh8" Feb 02 13:21:31 crc kubenswrapper[4721]: I0202 13:21:31.392397 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dx98r"] Feb 02 13:21:31 crc kubenswrapper[4721]: I0202 13:21:31.406831 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-dx98r" Feb 02 13:21:31 crc kubenswrapper[4721]: I0202 13:21:31.412447 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-9bfjx" Feb 02 13:21:31 crc kubenswrapper[4721]: I0202 13:21:31.412900 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 02 13:21:31 crc kubenswrapper[4721]: I0202 13:21:31.412927 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 02 13:21:31 crc kubenswrapper[4721]: I0202 13:21:31.413196 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 02 13:21:31 crc kubenswrapper[4721]: I0202 13:21:31.429183 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pk9z\" (UniqueName: \"kubernetes.io/projected/b256785d-0ae0-454d-8927-a28668507e06-kube-api-access-6pk9z\") pod \"dnsmasq-dns-675f4bcbfc-dx98r\" (UID: \"b256785d-0ae0-454d-8927-a28668507e06\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dx98r" Feb 02 13:21:31 crc kubenswrapper[4721]: I0202 13:21:31.429420 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b256785d-0ae0-454d-8927-a28668507e06-config\") pod \"dnsmasq-dns-675f4bcbfc-dx98r\" (UID: \"b256785d-0ae0-454d-8927-a28668507e06\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dx98r" Feb 02 13:21:31 crc kubenswrapper[4721]: I0202 13:21:31.447943 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dx98r"] Feb 02 13:21:31 crc kubenswrapper[4721]: I0202 13:21:31.480885 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ppnbl"] Feb 02 13:21:31 crc kubenswrapper[4721]: I0202 13:21:31.482449 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ppnbl" Feb 02 13:21:31 crc kubenswrapper[4721]: I0202 13:21:31.484442 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 02 13:21:31 crc kubenswrapper[4721]: I0202 13:21:31.504897 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ppnbl"] Feb 02 13:21:31 crc kubenswrapper[4721]: I0202 13:21:31.530770 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b256785d-0ae0-454d-8927-a28668507e06-config\") pod \"dnsmasq-dns-675f4bcbfc-dx98r\" (UID: \"b256785d-0ae0-454d-8927-a28668507e06\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dx98r" Feb 02 13:21:31 crc kubenswrapper[4721]: I0202 13:21:31.531144 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzdpg\" (UniqueName: \"kubernetes.io/projected/ded37b19-4830-4437-9a53-778f826f3582-kube-api-access-hzdpg\") pod \"dnsmasq-dns-78dd6ddcc-ppnbl\" (UID: \"ded37b19-4830-4437-9a53-778f826f3582\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ppnbl" Feb 02 13:21:31 crc kubenswrapper[4721]: I0202 13:21:31.531191 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ded37b19-4830-4437-9a53-778f826f3582-config\") pod \"dnsmasq-dns-78dd6ddcc-ppnbl\" (UID: \"ded37b19-4830-4437-9a53-778f826f3582\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ppnbl" Feb 02 13:21:31 crc kubenswrapper[4721]: I0202 13:21:31.531225 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ded37b19-4830-4437-9a53-778f826f3582-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ppnbl\" (UID: \"ded37b19-4830-4437-9a53-778f826f3582\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ppnbl" Feb 02 13:21:31 crc kubenswrapper[4721]: I0202 13:21:31.531248 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pk9z\" (UniqueName: \"kubernetes.io/projected/b256785d-0ae0-454d-8927-a28668507e06-kube-api-access-6pk9z\") pod \"dnsmasq-dns-675f4bcbfc-dx98r\" (UID: \"b256785d-0ae0-454d-8927-a28668507e06\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dx98r" Feb 02 13:21:31 crc kubenswrapper[4721]: I0202 13:21:31.533205 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b256785d-0ae0-454d-8927-a28668507e06-config\") pod \"dnsmasq-dns-675f4bcbfc-dx98r\" (UID: \"b256785d-0ae0-454d-8927-a28668507e06\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dx98r" Feb 02 13:21:31 crc kubenswrapper[4721]: I0202 13:21:31.556147 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pk9z\" (UniqueName: \"kubernetes.io/projected/b256785d-0ae0-454d-8927-a28668507e06-kube-api-access-6pk9z\") pod \"dnsmasq-dns-675f4bcbfc-dx98r\" (UID: \"b256785d-0ae0-454d-8927-a28668507e06\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dx98r" Feb 02 13:21:31 crc kubenswrapper[4721]: I0202 13:21:31.632357 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ded37b19-4830-4437-9a53-778f826f3582-config\") pod \"dnsmasq-dns-78dd6ddcc-ppnbl\" (UID: \"ded37b19-4830-4437-9a53-778f826f3582\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ppnbl" Feb 02 13:21:31 crc kubenswrapper[4721]: I0202 
13:21:31.632411 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ded37b19-4830-4437-9a53-778f826f3582-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ppnbl\" (UID: \"ded37b19-4830-4437-9a53-778f826f3582\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ppnbl" Feb 02 13:21:31 crc kubenswrapper[4721]: I0202 13:21:31.632516 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzdpg\" (UniqueName: \"kubernetes.io/projected/ded37b19-4830-4437-9a53-778f826f3582-kube-api-access-hzdpg\") pod \"dnsmasq-dns-78dd6ddcc-ppnbl\" (UID: \"ded37b19-4830-4437-9a53-778f826f3582\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ppnbl" Feb 02 13:21:31 crc kubenswrapper[4721]: I0202 13:21:31.666384 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzdpg\" (UniqueName: \"kubernetes.io/projected/ded37b19-4830-4437-9a53-778f826f3582-kube-api-access-hzdpg\") pod \"dnsmasq-dns-78dd6ddcc-ppnbl\" (UID: \"ded37b19-4830-4437-9a53-778f826f3582\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ppnbl" Feb 02 13:21:31 crc kubenswrapper[4721]: I0202 13:21:31.677686 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ded37b19-4830-4437-9a53-778f826f3582-config\") pod \"dnsmasq-dns-78dd6ddcc-ppnbl\" (UID: \"ded37b19-4830-4437-9a53-778f826f3582\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ppnbl" Feb 02 13:21:31 crc kubenswrapper[4721]: I0202 13:21:31.677713 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ded37b19-4830-4437-9a53-778f826f3582-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ppnbl\" (UID: \"ded37b19-4830-4437-9a53-778f826f3582\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ppnbl" Feb 02 13:21:31 crc kubenswrapper[4721]: I0202 13:21:31.737754 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-dx98r" Feb 02 13:21:31 crc kubenswrapper[4721]: I0202 13:21:31.801085 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ppnbl" Feb 02 13:21:32 crc kubenswrapper[4721]: W0202 13:21:32.228807 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb256785d_0ae0_454d_8927_a28668507e06.slice/crio-ffe8ae8e550e5fe561d651b70087eeacc3e6b311c6a4ad871733c9d878db06b3 WatchSource:0}: Error finding container ffe8ae8e550e5fe561d651b70087eeacc3e6b311c6a4ad871733c9d878db06b3: Status 404 returned error can't find the container with id ffe8ae8e550e5fe561d651b70087eeacc3e6b311c6a4ad871733c9d878db06b3 Feb 02 13:21:32 crc kubenswrapper[4721]: I0202 13:21:32.234472 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dx98r"] Feb 02 13:21:32 crc kubenswrapper[4721]: I0202 13:21:32.330733 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ppnbl"] Feb 02 13:21:32 crc kubenswrapper[4721]: W0202 13:21:32.335221 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podded37b19_4830_4437_9a53_778f826f3582.slice/crio-e4ecfe7e04cb7aa8d37b9511552603f7febbe5a304c908615695a3cf3f4a961f WatchSource:0}: Error finding container e4ecfe7e04cb7aa8d37b9511552603f7febbe5a304c908615695a3cf3f4a961f: Status 404 returned error can't find the container with id e4ecfe7e04cb7aa8d37b9511552603f7febbe5a304c908615695a3cf3f4a961f Feb 02 13:21:32 crc kubenswrapper[4721]: I0202 13:21:32.439330 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-ppnbl" event={"ID":"ded37b19-4830-4437-9a53-778f826f3582","Type":"ContainerStarted","Data":"e4ecfe7e04cb7aa8d37b9511552603f7febbe5a304c908615695a3cf3f4a961f"} Feb 02 13:21:32 crc kubenswrapper[4721]: I0202 13:21:32.442030 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-dx98r" event={"ID":"b256785d-0ae0-454d-8927-a28668507e06","Type":"ContainerStarted","Data":"ffe8ae8e550e5fe561d651b70087eeacc3e6b311c6a4ad871733c9d878db06b3"} Feb 02 13:21:34 crc kubenswrapper[4721]: I0202 13:21:34.313605 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dx98r"] Feb 02 13:21:34 crc kubenswrapper[4721]: I0202 13:21:34.338761 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-h9xj9"] Feb 02 13:21:34 crc kubenswrapper[4721]: I0202 13:21:34.340697 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-h9xj9" Feb 02 13:21:34 crc kubenswrapper[4721]: I0202 13:21:34.352376 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-h9xj9"] Feb 02 13:21:34 crc kubenswrapper[4721]: I0202 13:21:34.483920 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff-config\") pod \"dnsmasq-dns-666b6646f7-h9xj9\" (UID: \"242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff\") " pod="openstack/dnsmasq-dns-666b6646f7-h9xj9" Feb 02 13:21:34 crc kubenswrapper[4721]: I0202 13:21:34.484288 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp8wm\" (UniqueName: \"kubernetes.io/projected/242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff-kube-api-access-gp8wm\") pod \"dnsmasq-dns-666b6646f7-h9xj9\" (UID: \"242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff\") " pod="openstack/dnsmasq-dns-666b6646f7-h9xj9" Feb 02 13:21:34 crc kubenswrapper[4721]: I0202 13:21:34.484330 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff-dns-svc\") pod \"dnsmasq-dns-666b6646f7-h9xj9\" (UID: \"242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff\") " pod="openstack/dnsmasq-dns-666b6646f7-h9xj9" Feb 02 13:21:34 crc kubenswrapper[4721]: I0202 13:21:34.588227 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp8wm\" (UniqueName: \"kubernetes.io/projected/242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff-kube-api-access-gp8wm\") pod \"dnsmasq-dns-666b6646f7-h9xj9\" (UID: \"242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff\") " pod="openstack/dnsmasq-dns-666b6646f7-h9xj9" Feb 02 13:21:34 crc kubenswrapper[4721]: I0202 13:21:34.589776 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff-dns-svc\") pod \"dnsmasq-dns-666b6646f7-h9xj9\" (UID: \"242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff\") " pod="openstack/dnsmasq-dns-666b6646f7-h9xj9" Feb 02 13:21:34 crc kubenswrapper[4721]: I0202 13:21:34.589858 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff-dns-svc\") pod \"dnsmasq-dns-666b6646f7-h9xj9\" (UID: \"242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff\") " pod="openstack/dnsmasq-dns-666b6646f7-h9xj9" Feb 02 13:21:34 crc kubenswrapper[4721]: I0202 13:21:34.589986 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff-config\") pod \"dnsmasq-dns-666b6646f7-h9xj9\" (UID: \"242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff\") " pod="openstack/dnsmasq-dns-666b6646f7-h9xj9" Feb 02 13:21:34 crc kubenswrapper[4721]: I0202 13:21:34.591926 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff-config\") pod \"dnsmasq-dns-666b6646f7-h9xj9\" (UID: \"242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff\") " pod="openstack/dnsmasq-dns-666b6646f7-h9xj9" Feb 02 13:21:34 crc kubenswrapper[4721]: I0202 13:21:34.631090 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp8wm\" (UniqueName: 
\"kubernetes.io/projected/242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff-kube-api-access-gp8wm\") pod \"dnsmasq-dns-666b6646f7-h9xj9\" (UID: \"242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff\") " pod="openstack/dnsmasq-dns-666b6646f7-h9xj9" Feb 02 13:21:34 crc kubenswrapper[4721]: I0202 13:21:34.659549 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ppnbl"] Feb 02 13:21:34 crc kubenswrapper[4721]: I0202 13:21:34.666293 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-h9xj9" Feb 02 13:21:34 crc kubenswrapper[4721]: I0202 13:21:34.687008 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ljjc5"] Feb 02 13:21:34 crc kubenswrapper[4721]: I0202 13:21:34.689760 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ljjc5" Feb 02 13:21:34 crc kubenswrapper[4721]: I0202 13:21:34.711382 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ljjc5"] Feb 02 13:21:34 crc kubenswrapper[4721]: I0202 13:21:34.799262 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9598cea-e831-47fd-aa1a-08060e23bba2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-ljjc5\" (UID: \"f9598cea-e831-47fd-aa1a-08060e23bba2\") " pod="openstack/dnsmasq-dns-57d769cc4f-ljjc5" Feb 02 13:21:34 crc kubenswrapper[4721]: I0202 13:21:34.799395 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpqwx\" (UniqueName: \"kubernetes.io/projected/f9598cea-e831-47fd-aa1a-08060e23bba2-kube-api-access-rpqwx\") pod \"dnsmasq-dns-57d769cc4f-ljjc5\" (UID: \"f9598cea-e831-47fd-aa1a-08060e23bba2\") " pod="openstack/dnsmasq-dns-57d769cc4f-ljjc5" Feb 02 13:21:34 crc kubenswrapper[4721]: I0202 13:21:34.799428 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9598cea-e831-47fd-aa1a-08060e23bba2-config\") pod \"dnsmasq-dns-57d769cc4f-ljjc5\" (UID: \"f9598cea-e831-47fd-aa1a-08060e23bba2\") " pod="openstack/dnsmasq-dns-57d769cc4f-ljjc5" Feb 02 13:21:34 crc kubenswrapper[4721]: I0202 13:21:34.903130 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpqwx\" (UniqueName: \"kubernetes.io/projected/f9598cea-e831-47fd-aa1a-08060e23bba2-kube-api-access-rpqwx\") pod \"dnsmasq-dns-57d769cc4f-ljjc5\" (UID: \"f9598cea-e831-47fd-aa1a-08060e23bba2\") " pod="openstack/dnsmasq-dns-57d769cc4f-ljjc5" Feb 02 13:21:34 crc kubenswrapper[4721]: I0202 13:21:34.903552 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9598cea-e831-47fd-aa1a-08060e23bba2-config\") pod \"dnsmasq-dns-57d769cc4f-ljjc5\" (UID: \"f9598cea-e831-47fd-aa1a-08060e23bba2\") " pod="openstack/dnsmasq-dns-57d769cc4f-ljjc5" Feb 02 13:21:34 crc kubenswrapper[4721]: I0202 13:21:34.903677 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9598cea-e831-47fd-aa1a-08060e23bba2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-ljjc5\" (UID: \"f9598cea-e831-47fd-aa1a-08060e23bba2\") " pod="openstack/dnsmasq-dns-57d769cc4f-ljjc5" Feb 02 13:21:34 crc kubenswrapper[4721]: I0202 13:21:34.904707 4721 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9598cea-e831-47fd-aa1a-08060e23bba2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-ljjc5\" (UID: \"f9598cea-e831-47fd-aa1a-08060e23bba2\") " pod="openstack/dnsmasq-dns-57d769cc4f-ljjc5" Feb 02 13:21:34 crc kubenswrapper[4721]: I0202 13:21:34.905083 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9598cea-e831-47fd-aa1a-08060e23bba2-config\") pod \"dnsmasq-dns-57d769cc4f-ljjc5\" (UID: \"f9598cea-e831-47fd-aa1a-08060e23bba2\") " pod="openstack/dnsmasq-dns-57d769cc4f-ljjc5" Feb 02 13:21:34 crc kubenswrapper[4721]: I0202 13:21:34.930896 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpqwx\" (UniqueName: \"kubernetes.io/projected/f9598cea-e831-47fd-aa1a-08060e23bba2-kube-api-access-rpqwx\") pod \"dnsmasq-dns-57d769cc4f-ljjc5\" (UID: \"f9598cea-e831-47fd-aa1a-08060e23bba2\") " pod="openstack/dnsmasq-dns-57d769cc4f-ljjc5" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.147397 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ljjc5" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.320562 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-h9xj9"] Feb 02 13:21:35 crc kubenswrapper[4721]: W0202 13:21:35.321220 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod242f2c9f_2150_4d1a_8c40_9e34f1ffc5ff.slice/crio-f45d2f8df0c32334c973de797a13b02d6fc09764b2e977590919ddc80c08dc8f WatchSource:0}: Error finding container f45d2f8df0c32334c973de797a13b02d6fc09764b2e977590919ddc80c08dc8f: Status 404 returned error can't find the container with id f45d2f8df0c32334c973de797a13b02d6fc09764b2e977590919ddc80c08dc8f Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.471060 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.473179 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.475821 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.476569 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.476751 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.477015 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.477206 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.477208 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-rc7jr" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.477232 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.506540 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.516120 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.517206 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a57cea33-806c-4028-b59f-9f5e65289eac-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.517315 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a57cea33-806c-4028-b59f-9f5e65289eac-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.517350 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a57cea33-806c-4028-b59f-9f5e65289eac-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.517376 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a57cea33-806c-4028-b59f-9f5e65289eac-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.517492 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkd9k\" (UniqueName: \"kubernetes.io/projected/a57cea33-806c-4028-b59f-9f5e65289eac-kube-api-access-zkd9k\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.517613 4721 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5ffe0936-af63-4f85-b1ec-8b9cf7b1c000\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5ffe0936-af63-4f85-b1ec-8b9cf7b1c000\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.517643 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a57cea33-806c-4028-b59f-9f5e65289eac-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.517689 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a57cea33-806c-4028-b59f-9f5e65289eac-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.517719 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a57cea33-806c-4028-b59f-9f5e65289eac-config-data\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.517744 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a57cea33-806c-4028-b59f-9f5e65289eac-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.517768 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a57cea33-806c-4028-b59f-9f5e65289eac-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.519201 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-h9xj9" event={"ID":"242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff","Type":"ContainerStarted","Data":"f45d2f8df0c32334c973de797a13b02d6fc09764b2e977590919ddc80c08dc8f"} Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.519310 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.524992 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.526946 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.534525 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.545821 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.620580 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5ffe0936-af63-4f85-b1ec-8b9cf7b1c000\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5ffe0936-af63-4f85-b1ec-8b9cf7b1c000\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.620649 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a57cea33-806c-4028-b59f-9f5e65289eac-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.620737 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a57cea33-806c-4028-b59f-9f5e65289eac-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.620782 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a57cea33-806c-4028-b59f-9f5e65289eac-config-data\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.620803 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a57cea33-806c-4028-b59f-9f5e65289eac-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.620819 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a57cea33-806c-4028-b59f-9f5e65289eac-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.620937 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a57cea33-806c-4028-b59f-9f5e65289eac-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.621014 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a57cea33-806c-4028-b59f-9f5e65289eac-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.621045 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/a57cea33-806c-4028-b59f-9f5e65289eac-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.621091 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a57cea33-806c-4028-b59f-9f5e65289eac-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.621123 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkd9k\" (UniqueName: \"kubernetes.io/projected/a57cea33-806c-4028-b59f-9f5e65289eac-kube-api-access-zkd9k\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.622511 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a57cea33-806c-4028-b59f-9f5e65289eac-config-data\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.623703 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a57cea33-806c-4028-b59f-9f5e65289eac-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.624204 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a57cea33-806c-4028-b59f-9f5e65289eac-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.625112 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a57cea33-806c-4028-b59f-9f5e65289eac-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.630269 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a57cea33-806c-4028-b59f-9f5e65289eac-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.630671 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.630703 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5ffe0936-af63-4f85-b1ec-8b9cf7b1c000\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5ffe0936-af63-4f85-b1ec-8b9cf7b1c000\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f768cdfd31e4466d54219143b93bec72e999993fecffa835aa705bdc902f25fb/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.631556 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a57cea33-806c-4028-b59f-9f5e65289eac-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.632428 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a57cea33-806c-4028-b59f-9f5e65289eac-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.633145 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a57cea33-806c-4028-b59f-9f5e65289eac-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.633178 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a57cea33-806c-4028-b59f-9f5e65289eac-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.638796 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ljjc5"] Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.645417 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkd9k\" (UniqueName: \"kubernetes.io/projected/a57cea33-806c-4028-b59f-9f5e65289eac-kube-api-access-zkd9k\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.663868 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5ffe0936-af63-4f85-b1ec-8b9cf7b1c000\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5ffe0936-af63-4f85-b1ec-8b9cf7b1c000\") pod \"rabbitmq-server-0\" (UID: \"a57cea33-806c-4028-b59f-9f5e65289eac\") " pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.722967 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d21d961-1540-4610-89c0-ee265f66d728-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.723339 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/4d21d961-1540-4610-89c0-ee265f66d728-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.723368 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c21ba8be-1b43-4a8e-b03c-2a4e8cecb516\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c21ba8be-1b43-4a8e-b03c-2a4e8cecb516\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.724162 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4d21d961-1540-4610-89c0-ee265f66d728-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.724257 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bdnv\" (UniqueName: \"kubernetes.io/projected/4d21d961-1540-4610-89c0-ee265f66d728-kube-api-access-7bdnv\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.724275 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b6dbd607-3fa8-48e0-b420-4e939a47c460-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.724335 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d21d961-1540-4610-89c0-ee265f66d728-pod-info\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.724402 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b6dbd607-3fa8-48e0-b420-4e939a47c460-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.724428 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a64b0146-2b46-4525-a4ab-0ae755e09ffc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a64b0146-2b46-4525-a4ab-0ae755e09ffc\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.724460 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b6dbd607-3fa8-48e0-b420-4e939a47c460-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.724509 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/b6dbd607-3fa8-48e0-b420-4e939a47c460-server-conf\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.724532 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d21d961-1540-4610-89c0-ee265f66d728-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.724625 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b6dbd607-3fa8-48e0-b420-4e939a47c460-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.726757 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b6dbd607-3fa8-48e0-b420-4e939a47c460-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.726828 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d21d961-1540-4610-89c0-ee265f66d728-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.726857 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d21d961-1540-4610-89c0-ee265f66d728-server-conf\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.726906 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6dbd607-3fa8-48e0-b420-4e939a47c460-config-data\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.726957 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d21d961-1540-4610-89c0-ee265f66d728-config-data\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.726980 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b6dbd607-3fa8-48e0-b420-4e939a47c460-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.727038 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/b6dbd607-3fa8-48e0-b420-4e939a47c460-pod-info\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.727097 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d21d961-1540-4610-89c0-ee265f66d728-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.727128 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hj7s\" (UniqueName: \"kubernetes.io/projected/b6dbd607-3fa8-48e0-b420-4e939a47c460-kube-api-access-9hj7s\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.824794 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.828741 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d21d961-1540-4610-89c0-ee265f66d728-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.829781 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d21d961-1540-4610-89c0-ee265f66d728-server-conf\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.829740 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d21d961-1540-4610-89c0-ee265f66d728-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.829857 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6dbd607-3fa8-48e0-b420-4e939a47c460-config-data\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.829887 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d21d961-1540-4610-89c0-ee265f66d728-config-data\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.831015 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d21d961-1540-4610-89c0-ee265f66d728-server-conf\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.831095 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d21d961-1540-4610-89c0-ee265f66d728-config-data\") 
pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.831180 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6dbd607-3fa8-48e0-b420-4e939a47c460-config-data\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.832747 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b6dbd607-3fa8-48e0-b420-4e939a47c460-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.833384 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b6dbd607-3fa8-48e0-b420-4e939a47c460-pod-info\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.833422 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d21d961-1540-4610-89c0-ee265f66d728-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.833453 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hj7s\" (UniqueName: \"kubernetes.io/projected/b6dbd607-3fa8-48e0-b420-4e939a47c460-kube-api-access-9hj7s\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.833535 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d21d961-1540-4610-89c0-ee265f66d728-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.833573 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d21d961-1540-4610-89c0-ee265f66d728-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.833601 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c21ba8be-1b43-4a8e-b03c-2a4e8cecb516\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c21ba8be-1b43-4a8e-b03c-2a4e8cecb516\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.833623 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4d21d961-1540-4610-89c0-ee265f66d728-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.833701 4721 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bdnv\" (UniqueName: \"kubernetes.io/projected/4d21d961-1540-4610-89c0-ee265f66d728-kube-api-access-7bdnv\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.833726 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b6dbd607-3fa8-48e0-b420-4e939a47c460-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.833763 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d21d961-1540-4610-89c0-ee265f66d728-pod-info\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.833803 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b6dbd607-3fa8-48e0-b420-4e939a47c460-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.833829 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a64b0146-2b46-4525-a4ab-0ae755e09ffc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a64b0146-2b46-4525-a4ab-0ae755e09ffc\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.833856 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b6dbd607-3fa8-48e0-b420-4e939a47c460-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.833921 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b6dbd607-3fa8-48e0-b420-4e939a47c460-server-conf\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.833950 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d21d961-1540-4610-89c0-ee265f66d728-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.834017 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b6dbd607-3fa8-48e0-b420-4e939a47c460-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.834141 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/b6dbd607-3fa8-48e0-b420-4e939a47c460-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.834944 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b6dbd607-3fa8-48e0-b420-4e939a47c460-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.835235 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b6dbd607-3fa8-48e0-b420-4e939a47c460-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.835272 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b6dbd607-3fa8-48e0-b420-4e939a47c460-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.836338 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b6dbd607-3fa8-48e0-b420-4e939a47c460-server-conf\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.836680 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d21d961-1540-4610-89c0-ee265f66d728-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.837160 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b6dbd607-3fa8-48e0-b420-4e939a47c460-pod-info\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.837476 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d21d961-1540-4610-89c0-ee265f66d728-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.839845 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d21d961-1540-4610-89c0-ee265f66d728-pod-info\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.845885 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b6dbd607-3fa8-48e0-b420-4e939a47c460-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.845895 4721 csi_attacher.go:380] kubernetes.io/csi: 
attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.845955 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a64b0146-2b46-4525-a4ab-0ae755e09ffc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a64b0146-2b46-4525-a4ab-0ae755e09ffc\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f75a550d0755a9bbbcdd150a61fd98bebb2f378fbc6fe94c4a9320c6ab4aa089/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.845964 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d21d961-1540-4610-89c0-ee265f66d728-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.846054 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d21d961-1540-4610-89c0-ee265f66d728-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.846084 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b6dbd607-3fa8-48e0-b420-4e939a47c460-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.846531 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4d21d961-1540-4610-89c0-ee265f66d728-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.846760 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.846793 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c21ba8be-1b43-4a8e-b03c-2a4e8cecb516\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c21ba8be-1b43-4a8e-b03c-2a4e8cecb516\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/82d0d0f3d0e2d4632b5139e9a2e3120dee94db4e9ede7c4dd6d6473a90916d83/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.846967 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b6dbd607-3fa8-48e0-b420-4e939a47c460-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.855266 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bdnv\" (UniqueName: \"kubernetes.io/projected/4d21d961-1540-4610-89c0-ee265f66d728-kube-api-access-7bdnv\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.856701 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hj7s\" (UniqueName: \"kubernetes.io/projected/b6dbd607-3fa8-48e0-b420-4e939a47c460-kube-api-access-9hj7s\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.863225 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.864890 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.869769 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.869884 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.870168 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.870479 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-r4bj5" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.870613 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.870660 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.870731 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.876765 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.896119 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c21ba8be-1b43-4a8e-b03c-2a4e8cecb516\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c21ba8be-1b43-4a8e-b03c-2a4e8cecb516\") pod \"rabbitmq-server-2\" (UID: \"b6dbd607-3fa8-48e0-b420-4e939a47c460\") " pod="openstack/rabbitmq-server-2" Feb 02 13:21:35 crc kubenswrapper[4721]: I0202 13:21:35.969711 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a64b0146-2b46-4525-a4ab-0ae755e09ffc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a64b0146-2b46-4525-a4ab-0ae755e09ffc\") pod \"rabbitmq-server-1\" (UID: \"4d21d961-1540-4610-89c0-ee265f66d728\") " pod="openstack/rabbitmq-server-1" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.045080 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/496bb19e-217b-4896-9bee-8082ac5da28b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.045149 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/496bb19e-217b-4896-9bee-8082ac5da28b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.045180 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/496bb19e-217b-4896-9bee-8082ac5da28b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.045204 4721 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/496bb19e-217b-4896-9bee-8082ac5da28b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.045227 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npm8v\" (UniqueName: \"kubernetes.io/projected/496bb19e-217b-4896-9bee-8082ac5da28b-kube-api-access-npm8v\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.045244 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/496bb19e-217b-4896-9bee-8082ac5da28b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.045277 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/496bb19e-217b-4896-9bee-8082ac5da28b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.045297 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/496bb19e-217b-4896-9bee-8082ac5da28b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.045320 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/496bb19e-217b-4896-9bee-8082ac5da28b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.045349 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/496bb19e-217b-4896-9bee-8082ac5da28b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.045372 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e97c06f6-7cba-4e06-8735-05d7d5af9e90\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e97c06f6-7cba-4e06-8735-05d7d5af9e90\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.151173 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/496bb19e-217b-4896-9bee-8082ac5da28b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc 
kubenswrapper[4721]: I0202 13:21:36.151260 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/496bb19e-217b-4896-9bee-8082ac5da28b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.151284 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/496bb19e-217b-4896-9bee-8082ac5da28b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.151321 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/496bb19e-217b-4896-9bee-8082ac5da28b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.151346 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npm8v\" (UniqueName: \"kubernetes.io/projected/496bb19e-217b-4896-9bee-8082ac5da28b-kube-api-access-npm8v\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.151397 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/496bb19e-217b-4896-9bee-8082ac5da28b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.151426 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/496bb19e-217b-4896-9bee-8082ac5da28b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.151455 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/496bb19e-217b-4896-9bee-8082ac5da28b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.151480 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/496bb19e-217b-4896-9bee-8082ac5da28b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.151516 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e97c06f6-7cba-4e06-8735-05d7d5af9e90\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e97c06f6-7cba-4e06-8735-05d7d5af9e90\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.151617 4721 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/496bb19e-217b-4896-9bee-8082ac5da28b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.152173 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/496bb19e-217b-4896-9bee-8082ac5da28b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.153785 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/496bb19e-217b-4896-9bee-8082ac5da28b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.154568 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/496bb19e-217b-4896-9bee-8082ac5da28b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.155774 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/496bb19e-217b-4896-9bee-8082ac5da28b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.170963 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/496bb19e-217b-4896-9bee-8082ac5da28b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.171880 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/496bb19e-217b-4896-9bee-8082ac5da28b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.171924 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.173918 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/496bb19e-217b-4896-9bee-8082ac5da28b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.175871 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/496bb19e-217b-4896-9bee-8082ac5da28b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.176328 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.197657 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/496bb19e-217b-4896-9bee-8082ac5da28b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.232722 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.232961 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e97c06f6-7cba-4e06-8735-05d7d5af9e90\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e97c06f6-7cba-4e06-8735-05d7d5af9e90\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/73a1434e638350512a0f0c04a2f3b8af25c25424644f85a28328e741ff86171d/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.250138 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npm8v\" (UniqueName: \"kubernetes.io/projected/496bb19e-217b-4896-9bee-8082ac5da28b-kube-api-access-npm8v\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.293346 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e97c06f6-7cba-4e06-8735-05d7d5af9e90\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e97c06f6-7cba-4e06-8735-05d7d5af9e90\") pod \"rabbitmq-cell1-server-0\" (UID: \"496bb19e-217b-4896-9bee-8082ac5da28b\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:36 crc kubenswrapper[4721]: I0202 13:21:36.562675 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.271485 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.273103 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.278507 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-hs7j6" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.278908 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.279017 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.279148 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.287343 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.290477 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.377479 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4-config-data-default\") pod \"openstack-galera-0\" (UID: \"f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4\") " pod="openstack/openstack-galera-0" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.378033 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4\") " pod="openstack/openstack-galera-0" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.378263 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4\") " pod="openstack/openstack-galera-0" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.378296 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4-kolla-config\") pod \"openstack-galera-0\" (UID: \"f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4\") " pod="openstack/openstack-galera-0" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.378566 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4\") " pod="openstack/openstack-galera-0" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.378898 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbksr\" (UniqueName: \"kubernetes.io/projected/f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4-kube-api-access-kbksr\") pod \"openstack-galera-0\" (UID: \"f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4\") " pod="openstack/openstack-galera-0" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.379155 4721 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-67f27995-a569-40a0-9fab-b26d2604807c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-67f27995-a569-40a0-9fab-b26d2604807c\") pod \"openstack-galera-0\" (UID: \"f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4\") " pod="openstack/openstack-galera-0" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.379282 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4\") " pod="openstack/openstack-galera-0" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.480960 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4\") " pod="openstack/openstack-galera-0" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.481049 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbksr\" (UniqueName: \"kubernetes.io/projected/f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4-kube-api-access-kbksr\") pod \"openstack-galera-0\" (UID: \"f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4\") " pod="openstack/openstack-galera-0" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.481136 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-67f27995-a569-40a0-9fab-b26d2604807c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-67f27995-a569-40a0-9fab-b26d2604807c\") pod \"openstack-galera-0\" (UID: \"f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4\") " pod="openstack/openstack-galera-0" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.481177 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4\") " pod="openstack/openstack-galera-0" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.481208 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4-config-data-default\") pod \"openstack-galera-0\" (UID: \"f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4\") " pod="openstack/openstack-galera-0" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.481233 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4\") " pod="openstack/openstack-galera-0" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.481266 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4\") " pod="openstack/openstack-galera-0" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.481296 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4-kolla-config\") pod \"openstack-galera-0\" (UID: \"f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4\") " pod="openstack/openstack-galera-0" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.482018 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4\") " pod="openstack/openstack-galera-0" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.482253 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4-kolla-config\") pod \"openstack-galera-0\" (UID: \"f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4\") " pod="openstack/openstack-galera-0" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.482537 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4-config-data-default\") pod \"openstack-galera-0\" (UID: \"f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4\") " pod="openstack/openstack-galera-0" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.483180 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4\") " pod="openstack/openstack-galera-0" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.485301 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4\") " pod="openstack/openstack-galera-0" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.485836 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4\") " pod="openstack/openstack-galera-0" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.506749 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbksr\" (UniqueName: \"kubernetes.io/projected/f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4-kube-api-access-kbksr\") pod \"openstack-galera-0\" (UID: \"f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4\") " pod="openstack/openstack-galera-0" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.509234 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.509287 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-67f27995-a569-40a0-9fab-b26d2604807c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-67f27995-a569-40a0-9fab-b26d2604807c\") pod \"openstack-galera-0\" (UID: \"f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/02cfc071c80aa38381e977cb3a0a7f7727c6750de0c68f48e91714708c5e03a5/globalmount\"" pod="openstack/openstack-galera-0" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.539651 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-67f27995-a569-40a0-9fab-b26d2604807c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-67f27995-a569-40a0-9fab-b26d2604807c\") pod \"openstack-galera-0\" (UID: \"f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4\") " pod="openstack/openstack-galera-0" Feb 02 13:21:37 crc kubenswrapper[4721]: I0202 13:21:37.603552 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 02 13:21:38 crc kubenswrapper[4721]: I0202 13:21:38.731806 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 13:21:38 crc kubenswrapper[4721]: I0202 13:21:38.733761 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 02 13:21:38 crc kubenswrapper[4721]: I0202 13:21:38.735968 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 02 13:21:38 crc kubenswrapper[4721]: I0202 13:21:38.736420 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-z2vw8" Feb 02 13:21:38 crc kubenswrapper[4721]: I0202 13:21:38.736604 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 02 13:21:38 crc kubenswrapper[4721]: I0202 13:21:38.741909 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 13:21:38 crc kubenswrapper[4721]: I0202 13:21:38.751544 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 02 13:21:38 crc kubenswrapper[4721]: I0202 13:21:38.908429 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2505bd6b-64d4-4d17-9c1a-0e89562612be-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2505bd6b-64d4-4d17-9c1a-0e89562612be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:21:38 crc kubenswrapper[4721]: I0202 13:21:38.908478 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8564e4e5-bb86-4518-8e65-498ae90b7207\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8564e4e5-bb86-4518-8e65-498ae90b7207\") pod \"openstack-cell1-galera-0\" (UID: \"2505bd6b-64d4-4d17-9c1a-0e89562612be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:21:38 crc kubenswrapper[4721]: I0202 13:21:38.908529 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2505bd6b-64d4-4d17-9c1a-0e89562612be-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: 
\"2505bd6b-64d4-4d17-9c1a-0e89562612be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:21:38 crc kubenswrapper[4721]: I0202 13:21:38.908565 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgtvc\" (UniqueName: \"kubernetes.io/projected/2505bd6b-64d4-4d17-9c1a-0e89562612be-kube-api-access-pgtvc\") pod \"openstack-cell1-galera-0\" (UID: \"2505bd6b-64d4-4d17-9c1a-0e89562612be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:21:38 crc kubenswrapper[4721]: I0202 13:21:38.908590 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2505bd6b-64d4-4d17-9c1a-0e89562612be-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2505bd6b-64d4-4d17-9c1a-0e89562612be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:21:38 crc kubenswrapper[4721]: I0202 13:21:38.908633 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2505bd6b-64d4-4d17-9c1a-0e89562612be-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2505bd6b-64d4-4d17-9c1a-0e89562612be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:21:38 crc kubenswrapper[4721]: I0202 13:21:38.908657 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2505bd6b-64d4-4d17-9c1a-0e89562612be-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2505bd6b-64d4-4d17-9c1a-0e89562612be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:21:38 crc kubenswrapper[4721]: I0202 13:21:38.908680 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2505bd6b-64d4-4d17-9c1a-0e89562612be-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2505bd6b-64d4-4d17-9c1a-0e89562612be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:21:38 crc kubenswrapper[4721]: I0202 13:21:38.937352 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 02 13:21:38 crc kubenswrapper[4721]: I0202 13:21:38.939225 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 02 13:21:38 crc kubenswrapper[4721]: I0202 13:21:38.943666 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-q9dzm" Feb 02 13:21:38 crc kubenswrapper[4721]: I0202 13:21:38.943688 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 02 13:21:38 crc kubenswrapper[4721]: I0202 13:21:38.943799 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 02 13:21:38 crc kubenswrapper[4721]: I0202 13:21:38.958184 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.011338 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2505bd6b-64d4-4d17-9c1a-0e89562612be-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2505bd6b-64d4-4d17-9c1a-0e89562612be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.011398 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2505bd6b-64d4-4d17-9c1a-0e89562612be-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2505bd6b-64d4-4d17-9c1a-0e89562612be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.011437 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2505bd6b-64d4-4d17-9c1a-0e89562612be-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2505bd6b-64d4-4d17-9c1a-0e89562612be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.011570 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2505bd6b-64d4-4d17-9c1a-0e89562612be-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2505bd6b-64d4-4d17-9c1a-0e89562612be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.011602 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8564e4e5-bb86-4518-8e65-498ae90b7207\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8564e4e5-bb86-4518-8e65-498ae90b7207\") pod \"openstack-cell1-galera-0\" (UID: \"2505bd6b-64d4-4d17-9c1a-0e89562612be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.011639 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2505bd6b-64d4-4d17-9c1a-0e89562612be-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2505bd6b-64d4-4d17-9c1a-0e89562612be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.011678 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgtvc\" (UniqueName: \"kubernetes.io/projected/2505bd6b-64d4-4d17-9c1a-0e89562612be-kube-api-access-pgtvc\") pod \"openstack-cell1-galera-0\" (UID: \"2505bd6b-64d4-4d17-9c1a-0e89562612be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.011710 4721 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2505bd6b-64d4-4d17-9c1a-0e89562612be-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2505bd6b-64d4-4d17-9c1a-0e89562612be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.012051 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2505bd6b-64d4-4d17-9c1a-0e89562612be-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2505bd6b-64d4-4d17-9c1a-0e89562612be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.012582 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2505bd6b-64d4-4d17-9c1a-0e89562612be-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2505bd6b-64d4-4d17-9c1a-0e89562612be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.012637 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2505bd6b-64d4-4d17-9c1a-0e89562612be-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2505bd6b-64d4-4d17-9c1a-0e89562612be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.014011 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2505bd6b-64d4-4d17-9c1a-0e89562612be-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2505bd6b-64d4-4d17-9c1a-0e89562612be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.015922 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.015954 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8564e4e5-bb86-4518-8e65-498ae90b7207\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8564e4e5-bb86-4518-8e65-498ae90b7207\") pod \"openstack-cell1-galera-0\" (UID: \"2505bd6b-64d4-4d17-9c1a-0e89562612be\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/95677eff6a9bdedc5b505c2f3bdc046d1b5b0ef5e3e705c1b7f37b1888d23524/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.018305 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2505bd6b-64d4-4d17-9c1a-0e89562612be-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2505bd6b-64d4-4d17-9c1a-0e89562612be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.032761 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2505bd6b-64d4-4d17-9c1a-0e89562612be-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2505bd6b-64d4-4d17-9c1a-0e89562612be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.032934 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgtvc\" (UniqueName: \"kubernetes.io/projected/2505bd6b-64d4-4d17-9c1a-0e89562612be-kube-api-access-pgtvc\") pod \"openstack-cell1-galera-0\" (UID: \"2505bd6b-64d4-4d17-9c1a-0e89562612be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.073371 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8564e4e5-bb86-4518-8e65-498ae90b7207\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8564e4e5-bb86-4518-8e65-498ae90b7207\") pod \"openstack-cell1-galera-0\" (UID: \"2505bd6b-64d4-4d17-9c1a-0e89562612be\") " pod="openstack/openstack-cell1-galera-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.113404 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a686ac60-f231-4070-98c7-7acbc66c29d5-kolla-config\") pod \"memcached-0\" (UID: \"a686ac60-f231-4070-98c7-7acbc66c29d5\") " pod="openstack/memcached-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.113503 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/a686ac60-f231-4070-98c7-7acbc66c29d5-memcached-tls-certs\") pod \"memcached-0\" (UID: \"a686ac60-f231-4070-98c7-7acbc66c29d5\") " pod="openstack/memcached-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.113556 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a686ac60-f231-4070-98c7-7acbc66c29d5-config-data\") pod \"memcached-0\" (UID: \"a686ac60-f231-4070-98c7-7acbc66c29d5\") " pod="openstack/memcached-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.113596 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shkz9\" (UniqueName: 
\"kubernetes.io/projected/a686ac60-f231-4070-98c7-7acbc66c29d5-kube-api-access-shkz9\") pod \"memcached-0\" (UID: \"a686ac60-f231-4070-98c7-7acbc66c29d5\") " pod="openstack/memcached-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.113634 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a686ac60-f231-4070-98c7-7acbc66c29d5-combined-ca-bundle\") pod \"memcached-0\" (UID: \"a686ac60-f231-4070-98c7-7acbc66c29d5\") " pod="openstack/memcached-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.214797 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/a686ac60-f231-4070-98c7-7acbc66c29d5-memcached-tls-certs\") pod \"memcached-0\" (UID: \"a686ac60-f231-4070-98c7-7acbc66c29d5\") " pod="openstack/memcached-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.214881 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a686ac60-f231-4070-98c7-7acbc66c29d5-config-data\") pod \"memcached-0\" (UID: \"a686ac60-f231-4070-98c7-7acbc66c29d5\") " pod="openstack/memcached-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.214927 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shkz9\" (UniqueName: \"kubernetes.io/projected/a686ac60-f231-4070-98c7-7acbc66c29d5-kube-api-access-shkz9\") pod \"memcached-0\" (UID: \"a686ac60-f231-4070-98c7-7acbc66c29d5\") " pod="openstack/memcached-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.214976 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a686ac60-f231-4070-98c7-7acbc66c29d5-combined-ca-bundle\") pod \"memcached-0\" (UID: \"a686ac60-f231-4070-98c7-7acbc66c29d5\") " pod="openstack/memcached-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.215040 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a686ac60-f231-4070-98c7-7acbc66c29d5-kolla-config\") pod \"memcached-0\" (UID: \"a686ac60-f231-4070-98c7-7acbc66c29d5\") " pod="openstack/memcached-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.215958 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a686ac60-f231-4070-98c7-7acbc66c29d5-kolla-config\") pod \"memcached-0\" (UID: \"a686ac60-f231-4070-98c7-7acbc66c29d5\") " pod="openstack/memcached-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.216515 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a686ac60-f231-4070-98c7-7acbc66c29d5-config-data\") pod \"memcached-0\" (UID: \"a686ac60-f231-4070-98c7-7acbc66c29d5\") " pod="openstack/memcached-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.221126 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a686ac60-f231-4070-98c7-7acbc66c29d5-combined-ca-bundle\") pod \"memcached-0\" (UID: \"a686ac60-f231-4070-98c7-7acbc66c29d5\") " pod="openstack/memcached-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.221496 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/a686ac60-f231-4070-98c7-7acbc66c29d5-memcached-tls-certs\") pod \"memcached-0\" (UID: \"a686ac60-f231-4070-98c7-7acbc66c29d5\") " pod="openstack/memcached-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.235877 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shkz9\" (UniqueName: \"kubernetes.io/projected/a686ac60-f231-4070-98c7-7acbc66c29d5-kube-api-access-shkz9\") pod \"memcached-0\" (UID: \"a686ac60-f231-4070-98c7-7acbc66c29d5\") " pod="openstack/memcached-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.256224 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 02 13:21:39 crc kubenswrapper[4721]: I0202 13:21:39.362474 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 02 13:21:40 crc kubenswrapper[4721]: W0202 13:21:40.087712 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9598cea_e831_47fd_aa1a_08060e23bba2.slice/crio-dcd5d79dba084b60aa8e85cba431f52f20415983b8d07817faa29db86eef35ea WatchSource:0}: Error finding container dcd5d79dba084b60aa8e85cba431f52f20415983b8d07817faa29db86eef35ea: Status 404 returned error can't find the container with id dcd5d79dba084b60aa8e85cba431f52f20415983b8d07817faa29db86eef35ea Feb 02 13:21:40 crc kubenswrapper[4721]: I0202 13:21:40.572021 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-ljjc5" event={"ID":"f9598cea-e831-47fd-aa1a-08060e23bba2","Type":"ContainerStarted","Data":"dcd5d79dba084b60aa8e85cba431f52f20415983b8d07817faa29db86eef35ea"} Feb 02 13:21:41 crc kubenswrapper[4721]: I0202 13:21:41.076544 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 13:21:41 crc kubenswrapper[4721]: I0202 13:21:41.079231 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 13:21:41 crc kubenswrapper[4721]: I0202 13:21:41.089097 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-78vw2" Feb 02 13:21:41 crc kubenswrapper[4721]: I0202 13:21:41.122114 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 13:21:41 crc kubenswrapper[4721]: I0202 13:21:41.154253 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlrn8\" (UniqueName: \"kubernetes.io/projected/cc071000-a602-4de6-a9bc-1c93b6d58c25-kube-api-access-wlrn8\") pod \"kube-state-metrics-0\" (UID: \"cc071000-a602-4de6-a9bc-1c93b6d58c25\") " pod="openstack/kube-state-metrics-0" Feb 02 13:21:41 crc kubenswrapper[4721]: I0202 13:21:41.275321 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlrn8\" (UniqueName: \"kubernetes.io/projected/cc071000-a602-4de6-a9bc-1c93b6d58c25-kube-api-access-wlrn8\") pod \"kube-state-metrics-0\" (UID: \"cc071000-a602-4de6-a9bc-1c93b6d58c25\") " pod="openstack/kube-state-metrics-0" Feb 02 13:21:41 crc kubenswrapper[4721]: I0202 13:21:41.338350 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlrn8\" (UniqueName: \"kubernetes.io/projected/cc071000-a602-4de6-a9bc-1c93b6d58c25-kube-api-access-wlrn8\") pod \"kube-state-metrics-0\" (UID: \"cc071000-a602-4de6-a9bc-1c93b6d58c25\") " pod="openstack/kube-state-metrics-0" Feb 02 13:21:41 crc kubenswrapper[4721]: I0202 13:21:41.454474 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.036931 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-6lvhx"] Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.038152 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-6lvhx" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.042173 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.042354 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-vlt46" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.050621 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-6lvhx"] Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.193438 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lp8f\" (UniqueName: \"kubernetes.io/projected/6064a9a4-2316-4bdd-abf1-934e9167528a-kube-api-access-7lp8f\") pod \"observability-ui-dashboards-66cbf594b5-6lvhx\" (UID: \"6064a9a4-2316-4bdd-abf1-934e9167528a\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-6lvhx" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.193765 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6064a9a4-2316-4bdd-abf1-934e9167528a-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-6lvhx\" (UID: \"6064a9a4-2316-4bdd-abf1-934e9167528a\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-6lvhx" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.296007 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lp8f\" (UniqueName: \"kubernetes.io/projected/6064a9a4-2316-4bdd-abf1-934e9167528a-kube-api-access-7lp8f\") pod \"observability-ui-dashboards-66cbf594b5-6lvhx\" (UID: \"6064a9a4-2316-4bdd-abf1-934e9167528a\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-6lvhx" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.296081 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6064a9a4-2316-4bdd-abf1-934e9167528a-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-6lvhx\" (UID: \"6064a9a4-2316-4bdd-abf1-934e9167528a\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-6lvhx" Feb 02 13:21:42 crc kubenswrapper[4721]: E0202 13:21:42.296231 4721 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Feb 02 13:21:42 crc kubenswrapper[4721]: E0202 13:21:42.296306 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6064a9a4-2316-4bdd-abf1-934e9167528a-serving-cert podName:6064a9a4-2316-4bdd-abf1-934e9167528a nodeName:}" failed. No retries permitted until 2026-02-02 13:21:42.796278648 +0000 UTC m=+1243.098793037 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6064a9a4-2316-4bdd-abf1-934e9167528a-serving-cert") pod "observability-ui-dashboards-66cbf594b5-6lvhx" (UID: "6064a9a4-2316-4bdd-abf1-934e9167528a") : secret "observability-ui-dashboards" not found Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.323822 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lp8f\" (UniqueName: \"kubernetes.io/projected/6064a9a4-2316-4bdd-abf1-934e9167528a-kube-api-access-7lp8f\") pod \"observability-ui-dashboards-66cbf594b5-6lvhx\" (UID: \"6064a9a4-2316-4bdd-abf1-934e9167528a\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-6lvhx" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.431821 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7cc4f8f495-dxd8x"] Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.433687 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7cc4f8f495-dxd8x" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.459305 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.462889 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.468742 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.469366 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.469647 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-xt8ls" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.470251 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.471532 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.472366 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.481505 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.481690 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.532189 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7cc4f8f495-dxd8x"] Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.566392 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.601141 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbz2v\" (UniqueName: \"kubernetes.io/projected/20201931-5a9c-4f86-ad4d-1df122372f8a-kube-api-access-mbz2v\") pod 
\"console-7cc4f8f495-dxd8x\" (UID: \"20201931-5a9c-4f86-ad4d-1df122372f8a\") " pod="openshift-console/console-7cc4f8f495-dxd8x" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.601222 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/754114a2-a012-43fe-923b-a8cc3df91aa0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.601249 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/754114a2-a012-43fe-923b-a8cc3df91aa0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.601267 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20201931-5a9c-4f86-ad4d-1df122372f8a-trusted-ca-bundle\") pod \"console-7cc4f8f495-dxd8x\" (UID: \"20201931-5a9c-4f86-ad4d-1df122372f8a\") " pod="openshift-console/console-7cc4f8f495-dxd8x" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.601285 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/20201931-5a9c-4f86-ad4d-1df122372f8a-console-serving-cert\") pod \"console-7cc4f8f495-dxd8x\" (UID: \"20201931-5a9c-4f86-ad4d-1df122372f8a\") " pod="openshift-console/console-7cc4f8f495-dxd8x" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.601306 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq5th\" (UniqueName: \"kubernetes.io/projected/754114a2-a012-43fe-923b-a8cc3df91aa0-kube-api-access-nq5th\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.601321 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/20201931-5a9c-4f86-ad4d-1df122372f8a-console-config\") pod \"console-7cc4f8f495-dxd8x\" (UID: \"20201931-5a9c-4f86-ad4d-1df122372f8a\") " pod="openshift-console/console-7cc4f8f495-dxd8x" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.601349 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/754114a2-a012-43fe-923b-a8cc3df91aa0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.601393 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/754114a2-a012-43fe-923b-a8cc3df91aa0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.601416 4721 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/754114a2-a012-43fe-923b-a8cc3df91aa0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.601469 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/754114a2-a012-43fe-923b-a8cc3df91aa0-config\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.601489 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/20201931-5a9c-4f86-ad4d-1df122372f8a-console-oauth-config\") pod \"console-7cc4f8f495-dxd8x\" (UID: \"20201931-5a9c-4f86-ad4d-1df122372f8a\") " pod="openshift-console/console-7cc4f8f495-dxd8x" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.601527 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c1c836ef-ba24-4c50-a59b-e33260ae1ac3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1c836ef-ba24-4c50-a59b-e33260ae1ac3\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.601560 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/754114a2-a012-43fe-923b-a8cc3df91aa0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.601576 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/20201931-5a9c-4f86-ad4d-1df122372f8a-service-ca\") pod \"console-7cc4f8f495-dxd8x\" (UID: \"20201931-5a9c-4f86-ad4d-1df122372f8a\") " pod="openshift-console/console-7cc4f8f495-dxd8x" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.601592 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/20201931-5a9c-4f86-ad4d-1df122372f8a-oauth-serving-cert\") pod \"console-7cc4f8f495-dxd8x\" (UID: \"20201931-5a9c-4f86-ad4d-1df122372f8a\") " pod="openshift-console/console-7cc4f8f495-dxd8x" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.601611 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/754114a2-a012-43fe-923b-a8cc3df91aa0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.703433 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c1c836ef-ba24-4c50-a59b-e33260ae1ac3\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1c836ef-ba24-4c50-a59b-e33260ae1ac3\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.703506 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/754114a2-a012-43fe-923b-a8cc3df91aa0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.703528 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/20201931-5a9c-4f86-ad4d-1df122372f8a-service-ca\") pod \"console-7cc4f8f495-dxd8x\" (UID: \"20201931-5a9c-4f86-ad4d-1df122372f8a\") " pod="openshift-console/console-7cc4f8f495-dxd8x" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.703549 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/20201931-5a9c-4f86-ad4d-1df122372f8a-oauth-serving-cert\") pod \"console-7cc4f8f495-dxd8x\" (UID: \"20201931-5a9c-4f86-ad4d-1df122372f8a\") " pod="openshift-console/console-7cc4f8f495-dxd8x" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.703591 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/754114a2-a012-43fe-923b-a8cc3df91aa0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.703614 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbz2v\" (UniqueName: \"kubernetes.io/projected/20201931-5a9c-4f86-ad4d-1df122372f8a-kube-api-access-mbz2v\") pod \"console-7cc4f8f495-dxd8x\" (UID: \"20201931-5a9c-4f86-ad4d-1df122372f8a\") " pod="openshift-console/console-7cc4f8f495-dxd8x" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.704547 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/20201931-5a9c-4f86-ad4d-1df122372f8a-oauth-serving-cert\") pod \"console-7cc4f8f495-dxd8x\" (UID: \"20201931-5a9c-4f86-ad4d-1df122372f8a\") " pod="openshift-console/console-7cc4f8f495-dxd8x" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.704665 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/20201931-5a9c-4f86-ad4d-1df122372f8a-service-ca\") pod \"console-7cc4f8f495-dxd8x\" (UID: \"20201931-5a9c-4f86-ad4d-1df122372f8a\") " pod="openshift-console/console-7cc4f8f495-dxd8x" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.704701 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/754114a2-a012-43fe-923b-a8cc3df91aa0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.704764 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/20201931-5a9c-4f86-ad4d-1df122372f8a-trusted-ca-bundle\") pod \"console-7cc4f8f495-dxd8x\" (UID: \"20201931-5a9c-4f86-ad4d-1df122372f8a\") " pod="openshift-console/console-7cc4f8f495-dxd8x" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.704797 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/754114a2-a012-43fe-923b-a8cc3df91aa0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.704830 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/20201931-5a9c-4f86-ad4d-1df122372f8a-console-serving-cert\") pod \"console-7cc4f8f495-dxd8x\" (UID: \"20201931-5a9c-4f86-ad4d-1df122372f8a\") " pod="openshift-console/console-7cc4f8f495-dxd8x" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.704863 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq5th\" (UniqueName: \"kubernetes.io/projected/754114a2-a012-43fe-923b-a8cc3df91aa0-kube-api-access-nq5th\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.704883 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/20201931-5a9c-4f86-ad4d-1df122372f8a-console-config\") pod \"console-7cc4f8f495-dxd8x\" (UID: \"20201931-5a9c-4f86-ad4d-1df122372f8a\") " pod="openshift-console/console-7cc4f8f495-dxd8x" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.704927 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/754114a2-a012-43fe-923b-a8cc3df91aa0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.705043 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/754114a2-a012-43fe-923b-a8cc3df91aa0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.705407 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/754114a2-a012-43fe-923b-a8cc3df91aa0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.705456 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/754114a2-a012-43fe-923b-a8cc3df91aa0-config\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.705503 4721 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/20201931-5a9c-4f86-ad4d-1df122372f8a-console-oauth-config\") pod \"console-7cc4f8f495-dxd8x\" (UID: \"20201931-5a9c-4f86-ad4d-1df122372f8a\") " pod="openshift-console/console-7cc4f8f495-dxd8x" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.705733 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/20201931-5a9c-4f86-ad4d-1df122372f8a-console-config\") pod \"console-7cc4f8f495-dxd8x\" (UID: \"20201931-5a9c-4f86-ad4d-1df122372f8a\") " pod="openshift-console/console-7cc4f8f495-dxd8x" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.705862 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/754114a2-a012-43fe-923b-a8cc3df91aa0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.706128 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20201931-5a9c-4f86-ad4d-1df122372f8a-trusted-ca-bundle\") pod \"console-7cc4f8f495-dxd8x\" (UID: \"20201931-5a9c-4f86-ad4d-1df122372f8a\") " pod="openshift-console/console-7cc4f8f495-dxd8x" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.706632 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/754114a2-a012-43fe-923b-a8cc3df91aa0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.706728 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/754114a2-a012-43fe-923b-a8cc3df91aa0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.716815 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/754114a2-a012-43fe-923b-a8cc3df91aa0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.717736 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/754114a2-a012-43fe-923b-a8cc3df91aa0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.720653 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/754114a2-a012-43fe-923b-a8cc3df91aa0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 
13:21:42.720863 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/20201931-5a9c-4f86-ad4d-1df122372f8a-console-oauth-config\") pod \"console-7cc4f8f495-dxd8x\" (UID: \"20201931-5a9c-4f86-ad4d-1df122372f8a\") " pod="openshift-console/console-7cc4f8f495-dxd8x" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.724380 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/754114a2-a012-43fe-923b-a8cc3df91aa0-config\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.726583 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/754114a2-a012-43fe-923b-a8cc3df91aa0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.727447 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq5th\" (UniqueName: \"kubernetes.io/projected/754114a2-a012-43fe-923b-a8cc3df91aa0-kube-api-access-nq5th\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.735902 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.735946 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c1c836ef-ba24-4c50-a59b-e33260ae1ac3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1c836ef-ba24-4c50-a59b-e33260ae1ac3\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b8f571356fe2cb3489fcae1580a11d6ada33fdec8c8d1e0850e45e91197c9652/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.736012 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbz2v\" (UniqueName: \"kubernetes.io/projected/20201931-5a9c-4f86-ad4d-1df122372f8a-kube-api-access-mbz2v\") pod \"console-7cc4f8f495-dxd8x\" (UID: \"20201931-5a9c-4f86-ad4d-1df122372f8a\") " pod="openshift-console/console-7cc4f8f495-dxd8x" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.747648 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/20201931-5a9c-4f86-ad4d-1df122372f8a-console-serving-cert\") pod \"console-7cc4f8f495-dxd8x\" (UID: \"20201931-5a9c-4f86-ad4d-1df122372f8a\") " pod="openshift-console/console-7cc4f8f495-dxd8x" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.768289 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7cc4f8f495-dxd8x" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.811993 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6064a9a4-2316-4bdd-abf1-934e9167528a-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-6lvhx\" (UID: \"6064a9a4-2316-4bdd-abf1-934e9167528a\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-6lvhx" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.814396 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c1c836ef-ba24-4c50-a59b-e33260ae1ac3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1c836ef-ba24-4c50-a59b-e33260ae1ac3\") pod \"prometheus-metric-storage-0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.821030 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6064a9a4-2316-4bdd-abf1-934e9167528a-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-6lvhx\" (UID: \"6064a9a4-2316-4bdd-abf1-934e9167528a\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-6lvhx" Feb 02 13:21:42 crc kubenswrapper[4721]: I0202 13:21:42.965695 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-6lvhx" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.110685 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.757030 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.759057 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.764759 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.764905 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.765108 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-5jqnn" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.765213 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.765379 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.789574 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.832263 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pzpz\" (UniqueName: \"kubernetes.io/projected/080bfc29-50bc-4ba1-b097-4f5c54586d8c-kube-api-access-5pzpz\") pod \"ovsdbserver-nb-0\" (UID: \"080bfc29-50bc-4ba1-b097-4f5c54586d8c\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.832583 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/080bfc29-50bc-4ba1-b097-4f5c54586d8c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"080bfc29-50bc-4ba1-b097-4f5c54586d8c\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.832610 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/080bfc29-50bc-4ba1-b097-4f5c54586d8c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"080bfc29-50bc-4ba1-b097-4f5c54586d8c\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.832641 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/080bfc29-50bc-4ba1-b097-4f5c54586d8c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"080bfc29-50bc-4ba1-b097-4f5c54586d8c\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.832705 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/080bfc29-50bc-4ba1-b097-4f5c54586d8c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"080bfc29-50bc-4ba1-b097-4f5c54586d8c\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.832760 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/080bfc29-50bc-4ba1-b097-4f5c54586d8c-config\") pod \"ovsdbserver-nb-0\" (UID: \"080bfc29-50bc-4ba1-b097-4f5c54586d8c\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.832793 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-f242d59e-e98d-4b10-969c-f41a7a663807\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f242d59e-e98d-4b10-969c-f41a7a663807\") pod \"ovsdbserver-nb-0\" (UID: \"080bfc29-50bc-4ba1-b097-4f5c54586d8c\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.832848 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/080bfc29-50bc-4ba1-b097-4f5c54586d8c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"080bfc29-50bc-4ba1-b097-4f5c54586d8c\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.934552 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/080bfc29-50bc-4ba1-b097-4f5c54586d8c-config\") pod \"ovsdbserver-nb-0\" (UID: \"080bfc29-50bc-4ba1-b097-4f5c54586d8c\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.935920 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f242d59e-e98d-4b10-969c-f41a7a663807\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f242d59e-e98d-4b10-969c-f41a7a663807\") pod \"ovsdbserver-nb-0\" (UID: \"080bfc29-50bc-4ba1-b097-4f5c54586d8c\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.936209 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/080bfc29-50bc-4ba1-b097-4f5c54586d8c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"080bfc29-50bc-4ba1-b097-4f5c54586d8c\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.936466 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pzpz\" (UniqueName: \"kubernetes.io/projected/080bfc29-50bc-4ba1-b097-4f5c54586d8c-kube-api-access-5pzpz\") pod \"ovsdbserver-nb-0\" (UID: \"080bfc29-50bc-4ba1-b097-4f5c54586d8c\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.936649 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/080bfc29-50bc-4ba1-b097-4f5c54586d8c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"080bfc29-50bc-4ba1-b097-4f5c54586d8c\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.936755 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/080bfc29-50bc-4ba1-b097-4f5c54586d8c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"080bfc29-50bc-4ba1-b097-4f5c54586d8c\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.936970 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/080bfc29-50bc-4ba1-b097-4f5c54586d8c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"080bfc29-50bc-4ba1-b097-4f5c54586d8c\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.935855 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/080bfc29-50bc-4ba1-b097-4f5c54586d8c-config\") pod \"ovsdbserver-nb-0\" 
(UID: \"080bfc29-50bc-4ba1-b097-4f5c54586d8c\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.938521 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/080bfc29-50bc-4ba1-b097-4f5c54586d8c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"080bfc29-50bc-4ba1-b097-4f5c54586d8c\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.939793 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.939846 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f242d59e-e98d-4b10-969c-f41a7a663807\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f242d59e-e98d-4b10-969c-f41a7a663807\") pod \"ovsdbserver-nb-0\" (UID: \"080bfc29-50bc-4ba1-b097-4f5c54586d8c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e57f4511cabc27666ca4cabeef41edec6a12ceaf849ba65a34eca7aaee98ae69/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.940245 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/080bfc29-50bc-4ba1-b097-4f5c54586d8c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"080bfc29-50bc-4ba1-b097-4f5c54586d8c\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.941757 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/080bfc29-50bc-4ba1-b097-4f5c54586d8c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"080bfc29-50bc-4ba1-b097-4f5c54586d8c\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.948235 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/080bfc29-50bc-4ba1-b097-4f5c54586d8c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"080bfc29-50bc-4ba1-b097-4f5c54586d8c\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.948850 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/080bfc29-50bc-4ba1-b097-4f5c54586d8c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"080bfc29-50bc-4ba1-b097-4f5c54586d8c\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.950445 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/080bfc29-50bc-4ba1-b097-4f5c54586d8c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"080bfc29-50bc-4ba1-b097-4f5c54586d8c\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.965246 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pzpz\" (UniqueName: \"kubernetes.io/projected/080bfc29-50bc-4ba1-b097-4f5c54586d8c-kube-api-access-5pzpz\") pod \"ovsdbserver-nb-0\" (UID: \"080bfc29-50bc-4ba1-b097-4f5c54586d8c\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:21:43 crc kubenswrapper[4721]: I0202 13:21:43.986822 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-f242d59e-e98d-4b10-969c-f41a7a663807\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f242d59e-e98d-4b10-969c-f41a7a663807\") pod \"ovsdbserver-nb-0\" (UID: \"080bfc29-50bc-4ba1-b097-4f5c54586d8c\") " pod="openstack/ovsdbserver-nb-0" Feb 02 13:21:44 crc kubenswrapper[4721]: I0202 13:21:44.100914 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 13:21:44 crc kubenswrapper[4721]: I0202 13:21:44.763463 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:21:44 crc kubenswrapper[4721]: I0202 13:21:44.763514 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.586468 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-l5h78"] Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.589406 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-l5h78" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.594957 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.595030 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.595971 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-bg96d" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.607148 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-l5h78"] Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.636012 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-gz9nz"] Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.638230 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-gz9nz" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.651920 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-gz9nz"] Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.682931 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22rqp\" (UniqueName: \"kubernetes.io/projected/298ac2ef-6edb-40cb-bb92-8a8e039f333b-kube-api-access-22rqp\") pod \"ovn-controller-l5h78\" (UID: \"298ac2ef-6edb-40cb-bb92-8a8e039f333b\") " pod="openstack/ovn-controller-l5h78" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.682993 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/298ac2ef-6edb-40cb-bb92-8a8e039f333b-ovn-controller-tls-certs\") pod \"ovn-controller-l5h78\" (UID: \"298ac2ef-6edb-40cb-bb92-8a8e039f333b\") " pod="openstack/ovn-controller-l5h78" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.683116 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298ac2ef-6edb-40cb-bb92-8a8e039f333b-combined-ca-bundle\") pod \"ovn-controller-l5h78\" (UID: \"298ac2ef-6edb-40cb-bb92-8a8e039f333b\") " pod="openstack/ovn-controller-l5h78" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.683143 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/298ac2ef-6edb-40cb-bb92-8a8e039f333b-scripts\") pod \"ovn-controller-l5h78\" (UID: \"298ac2ef-6edb-40cb-bb92-8a8e039f333b\") " pod="openstack/ovn-controller-l5h78" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.683177 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/298ac2ef-6edb-40cb-bb92-8a8e039f333b-var-run-ovn\") pod \"ovn-controller-l5h78\" (UID: \"298ac2ef-6edb-40cb-bb92-8a8e039f333b\") " pod="openstack/ovn-controller-l5h78" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.683250 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/298ac2ef-6edb-40cb-bb92-8a8e039f333b-var-log-ovn\") pod \"ovn-controller-l5h78\" (UID: \"298ac2ef-6edb-40cb-bb92-8a8e039f333b\") " pod="openstack/ovn-controller-l5h78" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.683270 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/298ac2ef-6edb-40cb-bb92-8a8e039f333b-var-run\") pod \"ovn-controller-l5h78\" (UID: \"298ac2ef-6edb-40cb-bb92-8a8e039f333b\") " pod="openstack/ovn-controller-l5h78" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.785089 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a75df612-e3f4-4ea3-bfc8-daceaf59205d-var-log\") pod \"ovn-controller-ovs-gz9nz\" (UID: \"a75df612-e3f4-4ea3-bfc8-daceaf59205d\") " pod="openstack/ovn-controller-ovs-gz9nz" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.785188 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/298ac2ef-6edb-40cb-bb92-8a8e039f333b-var-log-ovn\") pod \"ovn-controller-l5h78\" (UID: \"298ac2ef-6edb-40cb-bb92-8a8e039f333b\") " pod="openstack/ovn-controller-l5h78" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.785214 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a75df612-e3f4-4ea3-bfc8-daceaf59205d-var-run\") pod \"ovn-controller-ovs-gz9nz\" (UID: \"a75df612-e3f4-4ea3-bfc8-daceaf59205d\") " pod="openstack/ovn-controller-ovs-gz9nz" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.785236 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/298ac2ef-6edb-40cb-bb92-8a8e039f333b-var-run\") pod \"ovn-controller-l5h78\" (UID: \"298ac2ef-6edb-40cb-bb92-8a8e039f333b\") " pod="openstack/ovn-controller-l5h78" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.785262 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c2xw\" (UniqueName: \"kubernetes.io/projected/a75df612-e3f4-4ea3-bfc8-daceaf59205d-kube-api-access-8c2xw\") pod \"ovn-controller-ovs-gz9nz\" (UID: \"a75df612-e3f4-4ea3-bfc8-daceaf59205d\") " pod="openstack/ovn-controller-ovs-gz9nz" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.785292 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a75df612-e3f4-4ea3-bfc8-daceaf59205d-etc-ovs\") pod \"ovn-controller-ovs-gz9nz\" (UID: \"a75df612-e3f4-4ea3-bfc8-daceaf59205d\") " pod="openstack/ovn-controller-ovs-gz9nz" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.785394 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22rqp\" (UniqueName: \"kubernetes.io/projected/298ac2ef-6edb-40cb-bb92-8a8e039f333b-kube-api-access-22rqp\") pod \"ovn-controller-l5h78\" (UID: \"298ac2ef-6edb-40cb-bb92-8a8e039f333b\") " pod="openstack/ovn-controller-l5h78" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.785457 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a75df612-e3f4-4ea3-bfc8-daceaf59205d-scripts\") pod \"ovn-controller-ovs-gz9nz\" (UID: \"a75df612-e3f4-4ea3-bfc8-daceaf59205d\") " pod="openstack/ovn-controller-ovs-gz9nz" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.785483 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/298ac2ef-6edb-40cb-bb92-8a8e039f333b-ovn-controller-tls-certs\") pod \"ovn-controller-l5h78\" (UID: \"298ac2ef-6edb-40cb-bb92-8a8e039f333b\") " pod="openstack/ovn-controller-l5h78" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.785552 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a75df612-e3f4-4ea3-bfc8-daceaf59205d-var-lib\") pod \"ovn-controller-ovs-gz9nz\" (UID: \"a75df612-e3f4-4ea3-bfc8-daceaf59205d\") " pod="openstack/ovn-controller-ovs-gz9nz" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.785585 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298ac2ef-6edb-40cb-bb92-8a8e039f333b-combined-ca-bundle\") pod 
\"ovn-controller-l5h78\" (UID: \"298ac2ef-6edb-40cb-bb92-8a8e039f333b\") " pod="openstack/ovn-controller-l5h78" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.785605 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/298ac2ef-6edb-40cb-bb92-8a8e039f333b-scripts\") pod \"ovn-controller-l5h78\" (UID: \"298ac2ef-6edb-40cb-bb92-8a8e039f333b\") " pod="openstack/ovn-controller-l5h78" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.785635 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/298ac2ef-6edb-40cb-bb92-8a8e039f333b-var-run-ovn\") pod \"ovn-controller-l5h78\" (UID: \"298ac2ef-6edb-40cb-bb92-8a8e039f333b\") " pod="openstack/ovn-controller-l5h78" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.786211 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/298ac2ef-6edb-40cb-bb92-8a8e039f333b-var-run-ovn\") pod \"ovn-controller-l5h78\" (UID: \"298ac2ef-6edb-40cb-bb92-8a8e039f333b\") " pod="openstack/ovn-controller-l5h78" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.786333 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/298ac2ef-6edb-40cb-bb92-8a8e039f333b-var-log-ovn\") pod \"ovn-controller-l5h78\" (UID: \"298ac2ef-6edb-40cb-bb92-8a8e039f333b\") " pod="openstack/ovn-controller-l5h78" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.786438 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/298ac2ef-6edb-40cb-bb92-8a8e039f333b-var-run\") pod \"ovn-controller-l5h78\" (UID: \"298ac2ef-6edb-40cb-bb92-8a8e039f333b\") " pod="openstack/ovn-controller-l5h78" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.791011 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/298ac2ef-6edb-40cb-bb92-8a8e039f333b-scripts\") pod \"ovn-controller-l5h78\" (UID: \"298ac2ef-6edb-40cb-bb92-8a8e039f333b\") " pod="openstack/ovn-controller-l5h78" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.791469 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298ac2ef-6edb-40cb-bb92-8a8e039f333b-combined-ca-bundle\") pod \"ovn-controller-l5h78\" (UID: \"298ac2ef-6edb-40cb-bb92-8a8e039f333b\") " pod="openstack/ovn-controller-l5h78" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.794089 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/298ac2ef-6edb-40cb-bb92-8a8e039f333b-ovn-controller-tls-certs\") pod \"ovn-controller-l5h78\" (UID: \"298ac2ef-6edb-40cb-bb92-8a8e039f333b\") " pod="openstack/ovn-controller-l5h78" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.806642 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22rqp\" (UniqueName: \"kubernetes.io/projected/298ac2ef-6edb-40cb-bb92-8a8e039f333b-kube-api-access-22rqp\") pod \"ovn-controller-l5h78\" (UID: \"298ac2ef-6edb-40cb-bb92-8a8e039f333b\") " pod="openstack/ovn-controller-l5h78" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.888016 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" 
(UniqueName: \"kubernetes.io/host-path/a75df612-e3f4-4ea3-bfc8-daceaf59205d-var-lib\") pod \"ovn-controller-ovs-gz9nz\" (UID: \"a75df612-e3f4-4ea3-bfc8-daceaf59205d\") " pod="openstack/ovn-controller-ovs-gz9nz" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.888437 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a75df612-e3f4-4ea3-bfc8-daceaf59205d-var-lib\") pod \"ovn-controller-ovs-gz9nz\" (UID: \"a75df612-e3f4-4ea3-bfc8-daceaf59205d\") " pod="openstack/ovn-controller-ovs-gz9nz" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.888824 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a75df612-e3f4-4ea3-bfc8-daceaf59205d-var-log\") pod \"ovn-controller-ovs-gz9nz\" (UID: \"a75df612-e3f4-4ea3-bfc8-daceaf59205d\") " pod="openstack/ovn-controller-ovs-gz9nz" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.888956 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a75df612-e3f4-4ea3-bfc8-daceaf59205d-var-run\") pod \"ovn-controller-ovs-gz9nz\" (UID: \"a75df612-e3f4-4ea3-bfc8-daceaf59205d\") " pod="openstack/ovn-controller-ovs-gz9nz" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.888993 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c2xw\" (UniqueName: \"kubernetes.io/projected/a75df612-e3f4-4ea3-bfc8-daceaf59205d-kube-api-access-8c2xw\") pod \"ovn-controller-ovs-gz9nz\" (UID: \"a75df612-e3f4-4ea3-bfc8-daceaf59205d\") " pod="openstack/ovn-controller-ovs-gz9nz" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.889032 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a75df612-e3f4-4ea3-bfc8-daceaf59205d-etc-ovs\") pod \"ovn-controller-ovs-gz9nz\" (UID: \"a75df612-e3f4-4ea3-bfc8-daceaf59205d\") " pod="openstack/ovn-controller-ovs-gz9nz" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.889114 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a75df612-e3f4-4ea3-bfc8-daceaf59205d-scripts\") pod \"ovn-controller-ovs-gz9nz\" (UID: \"a75df612-e3f4-4ea3-bfc8-daceaf59205d\") " pod="openstack/ovn-controller-ovs-gz9nz" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.889884 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a75df612-e3f4-4ea3-bfc8-daceaf59205d-var-run\") pod \"ovn-controller-ovs-gz9nz\" (UID: \"a75df612-e3f4-4ea3-bfc8-daceaf59205d\") " pod="openstack/ovn-controller-ovs-gz9nz" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.891033 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a75df612-e3f4-4ea3-bfc8-daceaf59205d-var-log\") pod \"ovn-controller-ovs-gz9nz\" (UID: \"a75df612-e3f4-4ea3-bfc8-daceaf59205d\") " pod="openstack/ovn-controller-ovs-gz9nz" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.891273 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a75df612-e3f4-4ea3-bfc8-daceaf59205d-etc-ovs\") pod \"ovn-controller-ovs-gz9nz\" (UID: \"a75df612-e3f4-4ea3-bfc8-daceaf59205d\") " pod="openstack/ovn-controller-ovs-gz9nz" Feb 02 13:21:45 crc kubenswrapper[4721]: 
I0202 13:21:45.891517 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a75df612-e3f4-4ea3-bfc8-daceaf59205d-scripts\") pod \"ovn-controller-ovs-gz9nz\" (UID: \"a75df612-e3f4-4ea3-bfc8-daceaf59205d\") " pod="openstack/ovn-controller-ovs-gz9nz" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.906308 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c2xw\" (UniqueName: \"kubernetes.io/projected/a75df612-e3f4-4ea3-bfc8-daceaf59205d-kube-api-access-8c2xw\") pod \"ovn-controller-ovs-gz9nz\" (UID: \"a75df612-e3f4-4ea3-bfc8-daceaf59205d\") " pod="openstack/ovn-controller-ovs-gz9nz" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.938368 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-l5h78" Feb 02 13:21:45 crc kubenswrapper[4721]: I0202 13:21:45.967606 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-gz9nz" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.480107 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.483126 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.485082 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.486717 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.486858 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.486967 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-t4kxj" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.500722 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.557412 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e175d27-fe10-4fb7-9ce6-cb98379357cc-config\") pod \"ovsdbserver-sb-0\" (UID: \"4e175d27-fe10-4fb7-9ce6-cb98379357cc\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.557638 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e175d27-fe10-4fb7-9ce6-cb98379357cc-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4e175d27-fe10-4fb7-9ce6-cb98379357cc\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.557717 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e175d27-fe10-4fb7-9ce6-cb98379357cc-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4e175d27-fe10-4fb7-9ce6-cb98379357cc\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.557891 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"pvc-7fd03a9a-65ca-4f76-8f7b-ac794e369792\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fd03a9a-65ca-4f76-8f7b-ac794e369792\") pod \"ovsdbserver-sb-0\" (UID: \"4e175d27-fe10-4fb7-9ce6-cb98379357cc\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.558034 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e175d27-fe10-4fb7-9ce6-cb98379357cc-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4e175d27-fe10-4fb7-9ce6-cb98379357cc\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.558172 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4e175d27-fe10-4fb7-9ce6-cb98379357cc-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4e175d27-fe10-4fb7-9ce6-cb98379357cc\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.558207 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snjz9\" (UniqueName: \"kubernetes.io/projected/4e175d27-fe10-4fb7-9ce6-cb98379357cc-kube-api-access-snjz9\") pod \"ovsdbserver-sb-0\" (UID: \"4e175d27-fe10-4fb7-9ce6-cb98379357cc\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.558284 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e175d27-fe10-4fb7-9ce6-cb98379357cc-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4e175d27-fe10-4fb7-9ce6-cb98379357cc\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.659705 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e175d27-fe10-4fb7-9ce6-cb98379357cc-config\") pod \"ovsdbserver-sb-0\" (UID: \"4e175d27-fe10-4fb7-9ce6-cb98379357cc\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.659801 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e175d27-fe10-4fb7-9ce6-cb98379357cc-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4e175d27-fe10-4fb7-9ce6-cb98379357cc\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.659827 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e175d27-fe10-4fb7-9ce6-cb98379357cc-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4e175d27-fe10-4fb7-9ce6-cb98379357cc\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.659876 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7fd03a9a-65ca-4f76-8f7b-ac794e369792\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fd03a9a-65ca-4f76-8f7b-ac794e369792\") pod \"ovsdbserver-sb-0\" (UID: \"4e175d27-fe10-4fb7-9ce6-cb98379357cc\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.659933 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4e175d27-fe10-4fb7-9ce6-cb98379357cc-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4e175d27-fe10-4fb7-9ce6-cb98379357cc\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.659986 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4e175d27-fe10-4fb7-9ce6-cb98379357cc-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4e175d27-fe10-4fb7-9ce6-cb98379357cc\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.660709 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e175d27-fe10-4fb7-9ce6-cb98379357cc-config\") pod \"ovsdbserver-sb-0\" (UID: \"4e175d27-fe10-4fb7-9ce6-cb98379357cc\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.660855 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snjz9\" (UniqueName: \"kubernetes.io/projected/4e175d27-fe10-4fb7-9ce6-cb98379357cc-kube-api-access-snjz9\") pod \"ovsdbserver-sb-0\" (UID: \"4e175d27-fe10-4fb7-9ce6-cb98379357cc\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.660904 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e175d27-fe10-4fb7-9ce6-cb98379357cc-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4e175d27-fe10-4fb7-9ce6-cb98379357cc\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.661354 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4e175d27-fe10-4fb7-9ce6-cb98379357cc-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4e175d27-fe10-4fb7-9ce6-cb98379357cc\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.662705 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e175d27-fe10-4fb7-9ce6-cb98379357cc-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4e175d27-fe10-4fb7-9ce6-cb98379357cc\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.665559 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e175d27-fe10-4fb7-9ce6-cb98379357cc-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4e175d27-fe10-4fb7-9ce6-cb98379357cc\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.668410 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e175d27-fe10-4fb7-9ce6-cb98379357cc-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4e175d27-fe10-4fb7-9ce6-cb98379357cc\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.669716 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e175d27-fe10-4fb7-9ce6-cb98379357cc-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4e175d27-fe10-4fb7-9ce6-cb98379357cc\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.691980 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.692028 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7fd03a9a-65ca-4f76-8f7b-ac794e369792\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fd03a9a-65ca-4f76-8f7b-ac794e369792\") pod \"ovsdbserver-sb-0\" (UID: \"4e175d27-fe10-4fb7-9ce6-cb98379357cc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0a56f1e81213f20d53ad52bdb2f1d153ff7361cb2b5b7b048fab2231a40ceec0/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.732197 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snjz9\" (UniqueName: \"kubernetes.io/projected/4e175d27-fe10-4fb7-9ce6-cb98379357cc-kube-api-access-snjz9\") pod \"ovsdbserver-sb-0\" (UID: \"4e175d27-fe10-4fb7-9ce6-cb98379357cc\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.798925 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7fd03a9a-65ca-4f76-8f7b-ac794e369792\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fd03a9a-65ca-4f76-8f7b-ac794e369792\") pod \"ovsdbserver-sb-0\" (UID: \"4e175d27-fe10-4fb7-9ce6-cb98379357cc\") " pod="openstack/ovsdbserver-sb-0" Feb 02 13:21:48 crc kubenswrapper[4721]: I0202 13:21:48.830529 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 02 13:21:51 crc kubenswrapper[4721]: I0202 13:21:51.767227 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 02 13:21:52 crc kubenswrapper[4721]: E0202 13:21:52.404417 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 02 13:21:52 crc kubenswrapper[4721]: E0202 13:21:52.405398 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzdpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-ppnbl_openstack(ded37b19-4830-4437-9a53-778f826f3582): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 13:21:52 crc kubenswrapper[4721]: E0202 13:21:52.407465 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-ppnbl" podUID="ded37b19-4830-4437-9a53-778f826f3582" Feb 02 13:21:52 crc kubenswrapper[4721]: W0202 13:21:52.429383 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d21d961_1540_4610_89c0_ee265f66d728.slice/crio-36afb1dca2e8a69b46d5d0b644f99e96709f045f97e67bf1482c430f4244c578 WatchSource:0}: Error finding container 36afb1dca2e8a69b46d5d0b644f99e96709f045f97e67bf1482c430f4244c578: Status 404 returned error can't find the container with id 36afb1dca2e8a69b46d5d0b644f99e96709f045f97e67bf1482c430f4244c578 Feb 02 13:21:52 crc kubenswrapper[4721]: E0202 13:21:52.531475 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 02 13:21:52 crc kubenswrapper[4721]: E0202 13:21:52.531703 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv 
--bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6pk9z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-dx98r_openstack(b256785d-0ae0-454d-8927-a28668507e06): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 13:21:52 crc kubenswrapper[4721]: E0202 13:21:52.534977 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-dx98r" podUID="b256785d-0ae0-454d-8927-a28668507e06" Feb 02 13:21:52 crc kubenswrapper[4721]: I0202 13:21:52.808930 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"4d21d961-1540-4610-89c0-ee265f66d728","Type":"ContainerStarted","Data":"36afb1dca2e8a69b46d5d0b644f99e96709f045f97e67bf1482c430f4244c578"} Feb 02 13:21:53 crc kubenswrapper[4721]: I0202 13:21:53.415697 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 13:21:53 crc kubenswrapper[4721]: W0202 13:21:53.424494 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda57cea33_806c_4028_b59f_9f5e65289eac.slice/crio-426dd2c7f5f6aedf336052b0eb98f8b62ecc6c202c3b2a5958bd85ca968a7a63 WatchSource:0}: Error finding container 426dd2c7f5f6aedf336052b0eb98f8b62ecc6c202c3b2a5958bd85ca968a7a63: Status 404 returned error can't find the container with id 426dd2c7f5f6aedf336052b0eb98f8b62ecc6c202c3b2a5958bd85ca968a7a63 Feb 02 13:21:53 crc kubenswrapper[4721]: I0202 13:21:53.846644 4721 generic.go:334] "Generic (PLEG): container finished" podID="242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff" containerID="3dbaa0b96c9ac5ea0094bc68c273d9176a5f00da8a9565f16566f8f212818281" exitCode=0 Feb 02 13:21:53 crc kubenswrapper[4721]: I0202 13:21:53.846985 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-h9xj9" 
event={"ID":"242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff","Type":"ContainerDied","Data":"3dbaa0b96c9ac5ea0094bc68c273d9176a5f00da8a9565f16566f8f212818281"} Feb 02 13:21:53 crc kubenswrapper[4721]: I0202 13:21:53.860037 4721 generic.go:334] "Generic (PLEG): container finished" podID="f9598cea-e831-47fd-aa1a-08060e23bba2" containerID="a20ee72f3c2584aa99632a64dbf1cb3b2b0b3ad83db068faf7bb964a8d3d6314" exitCode=0 Feb 02 13:21:53 crc kubenswrapper[4721]: I0202 13:21:53.860193 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-ljjc5" event={"ID":"f9598cea-e831-47fd-aa1a-08060e23bba2","Type":"ContainerDied","Data":"a20ee72f3c2584aa99632a64dbf1cb3b2b0b3ad83db068faf7bb964a8d3d6314"} Feb 02 13:21:53 crc kubenswrapper[4721]: I0202 13:21:53.865889 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a57cea33-806c-4028-b59f-9f5e65289eac","Type":"ContainerStarted","Data":"426dd2c7f5f6aedf336052b0eb98f8b62ecc6c202c3b2a5958bd85ca968a7a63"} Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.128773 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-dx98r" Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.202557 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ppnbl" Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.215004 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.230394 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.250606 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.254131 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b256785d-0ae0-454d-8927-a28668507e06-config\") pod \"b256785d-0ae0-454d-8927-a28668507e06\" (UID: \"b256785d-0ae0-454d-8927-a28668507e06\") " Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.254817 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b256785d-0ae0-454d-8927-a28668507e06-config" (OuterVolumeSpecName: "config") pod "b256785d-0ae0-454d-8927-a28668507e06" (UID: "b256785d-0ae0-454d-8927-a28668507e06"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.255028 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pk9z\" (UniqueName: \"kubernetes.io/projected/b256785d-0ae0-454d-8927-a28668507e06-kube-api-access-6pk9z\") pod \"b256785d-0ae0-454d-8927-a28668507e06\" (UID: \"b256785d-0ae0-454d-8927-a28668507e06\") " Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.257413 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b256785d-0ae0-454d-8927-a28668507e06-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.259234 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.262846 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b256785d-0ae0-454d-8927-a28668507e06-kube-api-access-6pk9z" (OuterVolumeSpecName: "kube-api-access-6pk9z") pod "b256785d-0ae0-454d-8927-a28668507e06" (UID: "b256785d-0ae0-454d-8927-a28668507e06"). InnerVolumeSpecName "kube-api-access-6pk9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:21:54 crc kubenswrapper[4721]: W0202 13:21:54.262777 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod754114a2_a012_43fe_923b_a8cc3df91aa0.slice/crio-1ae72f1d8842f29156e47082662e67e943468c1763e7bc830d0036db2470a455 WatchSource:0}: Error finding container 1ae72f1d8842f29156e47082662e67e943468c1763e7bc830d0036db2470a455: Status 404 returned error can't find the container with id 1ae72f1d8842f29156e47082662e67e943468c1763e7bc830d0036db2470a455 Feb 02 13:21:54 crc kubenswrapper[4721]: W0202 13:21:54.278596 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6dbd607_3fa8_48e0_b420_4e939a47c460.slice/crio-ce0a1e8e0cd036f62ef8c895b722638e8b437bbacf86893de06773f534d64f9d WatchSource:0}: Error finding container ce0a1e8e0cd036f62ef8c895b722638e8b437bbacf86893de06773f534d64f9d: Status 404 returned error can't find the container with id ce0a1e8e0cd036f62ef8c895b722638e8b437bbacf86893de06773f534d64f9d Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.279504 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7cc4f8f495-dxd8x"] Feb 02 13:21:54 crc kubenswrapper[4721]: W0202 13:21:54.286903 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda686ac60_f231_4070_98c7_7acbc66c29d5.slice/crio-944bebc705f70fcbcfaf0716e490e26611866c14af425ec38457dd857b00bcb4 WatchSource:0}: Error finding container 944bebc705f70fcbcfaf0716e490e26611866c14af425ec38457dd857b00bcb4: Status 404 returned error can't find the container with id 944bebc705f70fcbcfaf0716e490e26611866c14af425ec38457dd857b00bcb4 Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.288510 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 13:21:54 crc kubenswrapper[4721]: W0202 13:21:54.295415 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20201931_5a9c_4f86_ad4d_1df122372f8a.slice/crio-48a469129cff175d8479cfd49feb85c3e4574b8814b0a845b5cd5bc1d5df1849 
WatchSource:0}: Error finding container 48a469129cff175d8479cfd49feb85c3e4574b8814b0a845b5cd5bc1d5df1849: Status 404 returned error can't find the container with id 48a469129cff175d8479cfd49feb85c3e4574b8814b0a845b5cd5bc1d5df1849 Feb 02 13:21:54 crc kubenswrapper[4721]: W0202 13:21:54.298512 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod496bb19e_217b_4896_9bee_8082ac5da28b.slice/crio-8df7987520f710454284227fffc5726ebb458e3dc37e41c9c6583e8675accf28 WatchSource:0}: Error finding container 8df7987520f710454284227fffc5726ebb458e3dc37e41c9c6583e8675accf28: Status 404 returned error can't find the container with id 8df7987520f710454284227fffc5726ebb458e3dc37e41c9c6583e8675accf28 Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.360072 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ded37b19-4830-4437-9a53-778f826f3582-dns-svc\") pod \"ded37b19-4830-4437-9a53-778f826f3582\" (UID: \"ded37b19-4830-4437-9a53-778f826f3582\") " Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.360311 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzdpg\" (UniqueName: \"kubernetes.io/projected/ded37b19-4830-4437-9a53-778f826f3582-kube-api-access-hzdpg\") pod \"ded37b19-4830-4437-9a53-778f826f3582\" (UID: \"ded37b19-4830-4437-9a53-778f826f3582\") " Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.360579 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ded37b19-4830-4437-9a53-778f826f3582-config\") pod \"ded37b19-4830-4437-9a53-778f826f3582\" (UID: \"ded37b19-4830-4437-9a53-778f826f3582\") " Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.360986 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ded37b19-4830-4437-9a53-778f826f3582-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ded37b19-4830-4437-9a53-778f826f3582" (UID: "ded37b19-4830-4437-9a53-778f826f3582"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.361160 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ded37b19-4830-4437-9a53-778f826f3582-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.361181 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pk9z\" (UniqueName: \"kubernetes.io/projected/b256785d-0ae0-454d-8927-a28668507e06-kube-api-access-6pk9z\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.361949 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ded37b19-4830-4437-9a53-778f826f3582-config" (OuterVolumeSpecName: "config") pod "ded37b19-4830-4437-9a53-778f826f3582" (UID: "ded37b19-4830-4437-9a53-778f826f3582"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.366052 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ded37b19-4830-4437-9a53-778f826f3582-kube-api-access-hzdpg" (OuterVolumeSpecName: "kube-api-access-hzdpg") pod "ded37b19-4830-4437-9a53-778f826f3582" (UID: "ded37b19-4830-4437-9a53-778f826f3582"). 
InnerVolumeSpecName "kube-api-access-hzdpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.463366 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ded37b19-4830-4437-9a53-778f826f3582-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.463403 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzdpg\" (UniqueName: \"kubernetes.io/projected/ded37b19-4830-4437-9a53-778f826f3582-kube-api-access-hzdpg\") on node \"crc\" DevicePath \"\"" Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.720970 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.796001 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.805238 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-6lvhx"] Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.817070 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-l5h78"] Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.838154 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.881218 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"754114a2-a012-43fe-923b-a8cc3df91aa0","Type":"ContainerStarted","Data":"1ae72f1d8842f29156e47082662e67e943468c1763e7bc830d0036db2470a455"} Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.886163 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-ppnbl" event={"ID":"ded37b19-4830-4437-9a53-778f826f3582","Type":"ContainerDied","Data":"e4ecfe7e04cb7aa8d37b9511552603f7febbe5a304c908615695a3cf3f4a961f"} Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.886236 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ppnbl" Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.891795 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-h9xj9" event={"ID":"242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff","Type":"ContainerStarted","Data":"e39b433adf3c414a77139ad47d46e6db4a99ca06421ae738ff6dda51a5079476"} Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.891868 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-h9xj9" Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.893054 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.894287 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cc4f8f495-dxd8x" event={"ID":"20201931-5a9c-4f86-ad4d-1df122372f8a","Type":"ContainerStarted","Data":"ee31b8ae295838a63c0cbb3c51e04ccd55e0d107e0b3898e2c0f6e81f4671670"} Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.894324 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cc4f8f495-dxd8x" event={"ID":"20201931-5a9c-4f86-ad4d-1df122372f8a","Type":"ContainerStarted","Data":"48a469129cff175d8479cfd49feb85c3e4574b8814b0a845b5cd5bc1d5df1849"} Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.896149 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2505bd6b-64d4-4d17-9c1a-0e89562612be","Type":"ContainerStarted","Data":"59039b072cf0feef66db7d58bb9d13813151414ad72df9f948c8abd862ab710b"} Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.898785 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"b6dbd607-3fa8-48e0-b420-4e939a47c460","Type":"ContainerStarted","Data":"ce0a1e8e0cd036f62ef8c895b722638e8b437bbacf86893de06773f534d64f9d"} Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.901046 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"496bb19e-217b-4896-9bee-8082ac5da28b","Type":"ContainerStarted","Data":"8df7987520f710454284227fffc5726ebb458e3dc37e41c9c6583e8675accf28"} Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.902628 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-dx98r" event={"ID":"b256785d-0ae0-454d-8927-a28668507e06","Type":"ContainerDied","Data":"ffe8ae8e550e5fe561d651b70087eeacc3e6b311c6a4ad871733c9d878db06b3"} Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.902671 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-dx98r" Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.907036 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"a686ac60-f231-4070-98c7-7acbc66c29d5","Type":"ContainerStarted","Data":"944bebc705f70fcbcfaf0716e490e26611866c14af425ec38457dd857b00bcb4"} Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.910317 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-ljjc5" event={"ID":"f9598cea-e831-47fd-aa1a-08060e23bba2","Type":"ContainerStarted","Data":"79ee8e8028b6aa26c38e11ef05e770e5dd8bbf55490a89dcd0603734b8ebe97e"} Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.910479 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-ljjc5" Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.910410 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-h9xj9" podStartSLOduration=3.61007286 podStartE2EDuration="20.910397174s" podCreationTimestamp="2026-02-02 13:21:34 +0000 UTC" firstStartedPulling="2026-02-02 13:21:35.326731728 +0000 UTC m=+1235.629246107" lastFinishedPulling="2026-02-02 13:21:52.627056042 +0000 UTC m=+1252.929570421" observedRunningTime="2026-02-02 13:21:54.90617775 +0000 UTC m=+1255.208692139" watchObservedRunningTime="2026-02-02 13:21:54.910397174 +0000 UTC m=+1255.212911583" Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.945723 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ppnbl"] Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.954856 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ppnbl"] Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.979111 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dx98r"] Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.992679 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dx98r"] Feb 02 13:21:54 crc kubenswrapper[4721]: I0202 13:21:54.996965 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7cc4f8f495-dxd8x" podStartSLOduration=12.996943364 podStartE2EDuration="12.996943364s" podCreationTimestamp="2026-02-02 13:21:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:21:54.983405948 +0000 UTC m=+1255.285920337" watchObservedRunningTime="2026-02-02 13:21:54.996943364 +0000 UTC m=+1255.299457763" Feb 02 13:21:55 crc kubenswrapper[4721]: I0202 13:21:55.016658 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-ljjc5" podStartSLOduration=8.476555862 podStartE2EDuration="21.016637517s" podCreationTimestamp="2026-02-02 13:21:34 +0000 UTC" firstStartedPulling="2026-02-02 13:21:40.090809531 +0000 UTC m=+1240.393323930" lastFinishedPulling="2026-02-02 13:21:52.630891196 +0000 UTC m=+1252.933405585" observedRunningTime="2026-02-02 13:21:55.005531777 +0000 UTC m=+1255.308046166" watchObservedRunningTime="2026-02-02 13:21:55.016637517 +0000 UTC m=+1255.319151906" Feb 02 13:21:55 crc kubenswrapper[4721]: I0202 13:21:55.063216 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-gz9nz"] Feb 02 13:21:56 crc kubenswrapper[4721]: I0202 
Feb 02 13:21:56 crc kubenswrapper[4721]: I0202 13:21:56.425353 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b256785d-0ae0-454d-8927-a28668507e06" path="/var/lib/kubelet/pods/b256785d-0ae0-454d-8927-a28668507e06/volumes"
Feb 02 13:21:56 crc kubenswrapper[4721]: I0202 13:21:56.426147 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ded37b19-4830-4437-9a53-778f826f3582" path="/var/lib/kubelet/pods/ded37b19-4830-4437-9a53-778f826f3582/volumes"
Feb 02 13:21:56 crc kubenswrapper[4721]: W0202 13:21:56.872309 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e175d27_fe10_4fb7_9ce6_cb98379357cc.slice/crio-d00d9be33b48072beaaaab4076feea10fe8f0fdbb463271458ca986ec5edd095 WatchSource:0}: Error finding container d00d9be33b48072beaaaab4076feea10fe8f0fdbb463271458ca986ec5edd095: Status 404 returned error can't find the container with id d00d9be33b48072beaaaab4076feea10fe8f0fdbb463271458ca986ec5edd095
Feb 02 13:21:56 crc kubenswrapper[4721]: I0202 13:21:56.931655 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-l5h78" event={"ID":"298ac2ef-6edb-40cb-bb92-8a8e039f333b","Type":"ContainerStarted","Data":"13e13d77bd07f43bdb5f2bddb70e54f67e8098f6d29d70c860b3d45acca04514"}
Feb 02 13:21:56 crc kubenswrapper[4721]: I0202 13:21:56.934041 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gz9nz" event={"ID":"a75df612-e3f4-4ea3-bfc8-daceaf59205d","Type":"ContainerStarted","Data":"d326d84b5a85f3af2e8c53c3289b853de962bf85378968674fdf9ea42a704488"}
Feb 02 13:21:56 crc kubenswrapper[4721]: I0202 13:21:56.935606 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"080bfc29-50bc-4ba1-b097-4f5c54586d8c","Type":"ContainerStarted","Data":"01ede81af8b0a6f17de5660f45222af06e54bd3e55bcadb3c130698b8e10a8b0"}
Feb 02 13:21:56 crc kubenswrapper[4721]: I0202 13:21:56.936782 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-6lvhx" event={"ID":"6064a9a4-2316-4bdd-abf1-934e9167528a","Type":"ContainerStarted","Data":"2843497cad5bf287b6c581a67c84e3e6decf3e64aad739945932e5a281ee37ef"}
Feb 02 13:21:56 crc kubenswrapper[4721]: I0202 13:21:56.938399 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4","Type":"ContainerStarted","Data":"0d714d60036ede65d525b7a71c6621f278f9400401ce8e0f30061a5e3a3f3ac2"}
Feb 02 13:21:56 crc kubenswrapper[4721]: I0202 13:21:56.939603 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cc071000-a602-4de6-a9bc-1c93b6d58c25","Type":"ContainerStarted","Data":"f06dd476cbb6c5a0d98d77cc9568acefbefff19f14d266eab26513942d1c3774"}
Feb 02 13:21:56 crc kubenswrapper[4721]: I0202 13:21:56.940633 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4e175d27-fe10-4fb7-9ce6-cb98379357cc","Type":"ContainerStarted","Data":"d00d9be33b48072beaaaab4076feea10fe8f0fdbb463271458ca986ec5edd095"}
Feb 02 13:21:59 crc kubenswrapper[4721]: I0202 13:21:59.671012 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-h9xj9"
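Note: the W-level manager.go:1169 entry above is cAdvisor racing container creation: the crio-... cgroup shows up under kubepods-besteffort.slice before the runtime can resolve that container ID, so the lookup returns 404. The same ID (d00d9be33b48...) arrives moments later in the ContainerStarted PLEG event for ovsdbserver-sb-0, which is how you can tell the warning was transient. A sketch, under the assumption the journal dump keeps one entry per line on stdin, that pairs such warnings with the PLEG event that resolves them:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Pair cAdvisor "Failed to process watch event ... 404" warnings with a
// later PLEG ContainerStarted event carrying the same container ID; any
// ID left unpaired at EOF is worth a closer look.
var (
	watch404 = regexp.MustCompile(`Failed to process watch event .*crio-([0-9a-f]{64})`)
	started  = regexp.MustCompile(`"Type":"ContainerStarted","Data":"([0-9a-f]{64})"`)
)

func main() {
	pending := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines are long
	for sc.Scan() {
		line := sc.Text()
		if m := watch404.FindStringSubmatch(line); m != nil {
			pending[m[1]] = true
		} else if m := started.FindStringSubmatch(line); m != nil && pending[m[1]] {
			fmt.Printf("%s: 404 watch race resolved by ContainerStarted\n", m[1][:12])
			delete(pending, m[1])
		}
	}
	for id := range pending {
		fmt.Printf("%s: 404 watch event never followed by ContainerStarted\n", id[:12])
	}
}
```

Fed the kubelet journal (e.g. piped from journalctl), this prints one line per resolved race; in this log both 404 warnings resolve.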
event={"ID":"496bb19e-217b-4896-9bee-8082ac5da28b","Type":"ContainerStarted","Data":"77992fe4aaf2cf252e3f8b5179aa81c542c6cc143ebeb5bc6250cbc35937ab79"} Feb 02 13:22:00 crc kubenswrapper[4721]: I0202 13:22:00.002115 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"b6dbd607-3fa8-48e0-b420-4e939a47c460","Type":"ContainerStarted","Data":"ee3f7790a31c71e90ec0266b207ae1db8fc80457afd04cb12988c018efe7f723"} Feb 02 13:22:00 crc kubenswrapper[4721]: I0202 13:22:00.006373 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a57cea33-806c-4028-b59f-9f5e65289eac","Type":"ContainerStarted","Data":"c824f81205ecd08e0b369ac8397beb509a4ac36dc83f526d99dfec02aa78a3a3"} Feb 02 13:22:00 crc kubenswrapper[4721]: I0202 13:22:00.010229 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"4d21d961-1540-4610-89c0-ee265f66d728","Type":"ContainerStarted","Data":"562fa5c00d88c0f5830431f80588f0c092e7efe8f3354457564b72a3bf152ac5"} Feb 02 13:22:00 crc kubenswrapper[4721]: I0202 13:22:00.150182 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-ljjc5" Feb 02 13:22:00 crc kubenswrapper[4721]: I0202 13:22:00.246592 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-h9xj9"] Feb 02 13:22:00 crc kubenswrapper[4721]: I0202 13:22:00.246791 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-h9xj9" podUID="242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff" containerName="dnsmasq-dns" containerID="cri-o://e39b433adf3c414a77139ad47d46e6db4a99ca06421ae738ff6dda51a5079476" gracePeriod=10 Feb 02 13:22:01 crc kubenswrapper[4721]: I0202 13:22:01.021284 4721 generic.go:334] "Generic (PLEG): container finished" podID="242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff" containerID="e39b433adf3c414a77139ad47d46e6db4a99ca06421ae738ff6dda51a5079476" exitCode=0 Feb 02 13:22:01 crc kubenswrapper[4721]: I0202 13:22:01.021419 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-h9xj9" event={"ID":"242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff","Type":"ContainerDied","Data":"e39b433adf3c414a77139ad47d46e6db4a99ca06421ae738ff6dda51a5079476"} Feb 02 13:22:02 crc kubenswrapper[4721]: I0202 13:22:02.768461 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7cc4f8f495-dxd8x" Feb 02 13:22:02 crc kubenswrapper[4721]: I0202 13:22:02.768748 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7cc4f8f495-dxd8x" Feb 02 13:22:02 crc kubenswrapper[4721]: I0202 13:22:02.773321 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7cc4f8f495-dxd8x" Feb 02 13:22:03 crc kubenswrapper[4721]: I0202 13:22:03.042170 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7cc4f8f495-dxd8x" Feb 02 13:22:03 crc kubenswrapper[4721]: I0202 13:22:03.109748 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-679d56c757-8hcnt"] Feb 02 13:22:04 crc kubenswrapper[4721]: I0202 13:22:04.670542 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-h9xj9" podUID="242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: 
connection refused" Feb 02 13:22:07 crc kubenswrapper[4721]: I0202 13:22:07.441555 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-h9xj9" Feb 02 13:22:07 crc kubenswrapper[4721]: I0202 13:22:07.606175 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gp8wm\" (UniqueName: \"kubernetes.io/projected/242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff-kube-api-access-gp8wm\") pod \"242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff\" (UID: \"242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff\") " Feb 02 13:22:07 crc kubenswrapper[4721]: I0202 13:22:07.606268 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff-dns-svc\") pod \"242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff\" (UID: \"242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff\") " Feb 02 13:22:07 crc kubenswrapper[4721]: I0202 13:22:07.606406 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff-config\") pod \"242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff\" (UID: \"242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff\") " Feb 02 13:22:07 crc kubenswrapper[4721]: I0202 13:22:07.609449 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff-kube-api-access-gp8wm" (OuterVolumeSpecName: "kube-api-access-gp8wm") pod "242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff" (UID: "242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff"). InnerVolumeSpecName "kube-api-access-gp8wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:22:07 crc kubenswrapper[4721]: I0202 13:22:07.658418 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff" (UID: "242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:07 crc kubenswrapper[4721]: I0202 13:22:07.658449 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff-config" (OuterVolumeSpecName: "config") pod "242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff" (UID: "242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:07 crc kubenswrapper[4721]: I0202 13:22:07.711937 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gp8wm\" (UniqueName: \"kubernetes.io/projected/242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff-kube-api-access-gp8wm\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:07 crc kubenswrapper[4721]: I0202 13:22:07.711969 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:07 crc kubenswrapper[4721]: I0202 13:22:07.711979 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:08 crc kubenswrapper[4721]: I0202 13:22:08.095659 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-h9xj9" event={"ID":"242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff","Type":"ContainerDied","Data":"f45d2f8df0c32334c973de797a13b02d6fc09764b2e977590919ddc80c08dc8f"} Feb 02 13:22:08 crc kubenswrapper[4721]: I0202 13:22:08.096024 4721 scope.go:117] "RemoveContainer" containerID="e39b433adf3c414a77139ad47d46e6db4a99ca06421ae738ff6dda51a5079476" Feb 02 13:22:08 crc kubenswrapper[4721]: I0202 13:22:08.095711 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-h9xj9" Feb 02 13:22:08 crc kubenswrapper[4721]: I0202 13:22:08.140801 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-h9xj9"] Feb 02 13:22:08 crc kubenswrapper[4721]: I0202 13:22:08.151060 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-h9xj9"] Feb 02 13:22:08 crc kubenswrapper[4721]: I0202 13:22:08.316553 4721 scope.go:117] "RemoveContainer" containerID="3dbaa0b96c9ac5ea0094bc68c273d9176a5f00da8a9565f16566f8f212818281" Feb 02 13:22:08 crc kubenswrapper[4721]: I0202 13:22:08.426284 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff" path="/var/lib/kubelet/pods/242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff/volumes" Feb 02 13:22:08 crc kubenswrapper[4721]: I0202 13:22:08.864502 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-hkwkv"] Feb 02 13:22:08 crc kubenswrapper[4721]: E0202 13:22:08.865278 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff" containerName="init" Feb 02 13:22:08 crc kubenswrapper[4721]: I0202 13:22:08.865294 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff" containerName="init" Feb 02 13:22:08 crc kubenswrapper[4721]: E0202 13:22:08.865336 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff" containerName="dnsmasq-dns" Feb 02 13:22:08 crc kubenswrapper[4721]: I0202 13:22:08.865343 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff" containerName="dnsmasq-dns" Feb 02 13:22:08 crc kubenswrapper[4721]: I0202 13:22:08.865560 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="242f2c9f-2150-4d1a-8c40-9e34f1ffc5ff" containerName="dnsmasq-dns" Feb 02 13:22:08 crc kubenswrapper[4721]: I0202 13:22:08.866769 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-hkwkv" Feb 02 13:22:08 crc kubenswrapper[4721]: I0202 13:22:08.869161 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 02 13:22:08 crc kubenswrapper[4721]: I0202 13:22:08.896473 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-hkwkv"] Feb 02 13:22:08 crc kubenswrapper[4721]: I0202 13:22:08.947110 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753a63ae-970e-4dd1-a284-bc3b6027ca64-combined-ca-bundle\") pod \"ovn-controller-metrics-hkwkv\" (UID: \"753a63ae-970e-4dd1-a284-bc3b6027ca64\") " pod="openstack/ovn-controller-metrics-hkwkv" Feb 02 13:22:08 crc kubenswrapper[4721]: I0202 13:22:08.947177 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/753a63ae-970e-4dd1-a284-bc3b6027ca64-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hkwkv\" (UID: \"753a63ae-970e-4dd1-a284-bc3b6027ca64\") " pod="openstack/ovn-controller-metrics-hkwkv" Feb 02 13:22:08 crc kubenswrapper[4721]: I0202 13:22:08.947212 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/753a63ae-970e-4dd1-a284-bc3b6027ca64-ovs-rundir\") pod \"ovn-controller-metrics-hkwkv\" (UID: \"753a63ae-970e-4dd1-a284-bc3b6027ca64\") " pod="openstack/ovn-controller-metrics-hkwkv" Feb 02 13:22:08 crc kubenswrapper[4721]: I0202 13:22:08.947234 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/753a63ae-970e-4dd1-a284-bc3b6027ca64-ovn-rundir\") pod \"ovn-controller-metrics-hkwkv\" (UID: \"753a63ae-970e-4dd1-a284-bc3b6027ca64\") " pod="openstack/ovn-controller-metrics-hkwkv" Feb 02 13:22:08 crc kubenswrapper[4721]: I0202 13:22:08.947268 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tdlw\" (UniqueName: \"kubernetes.io/projected/753a63ae-970e-4dd1-a284-bc3b6027ca64-kube-api-access-4tdlw\") pod \"ovn-controller-metrics-hkwkv\" (UID: \"753a63ae-970e-4dd1-a284-bc3b6027ca64\") " pod="openstack/ovn-controller-metrics-hkwkv" Feb 02 13:22:08 crc kubenswrapper[4721]: I0202 13:22:08.947391 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/753a63ae-970e-4dd1-a284-bc3b6027ca64-config\") pod \"ovn-controller-metrics-hkwkv\" (UID: \"753a63ae-970e-4dd1-a284-bc3b6027ca64\") " pod="openstack/ovn-controller-metrics-hkwkv" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.049954 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753a63ae-970e-4dd1-a284-bc3b6027ca64-combined-ca-bundle\") pod \"ovn-controller-metrics-hkwkv\" (UID: \"753a63ae-970e-4dd1-a284-bc3b6027ca64\") " pod="openstack/ovn-controller-metrics-hkwkv" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.050349 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/753a63ae-970e-4dd1-a284-bc3b6027ca64-metrics-certs-tls-certs\") pod 
\"ovn-controller-metrics-hkwkv\" (UID: \"753a63ae-970e-4dd1-a284-bc3b6027ca64\") " pod="openstack/ovn-controller-metrics-hkwkv" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.050393 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/753a63ae-970e-4dd1-a284-bc3b6027ca64-ovs-rundir\") pod \"ovn-controller-metrics-hkwkv\" (UID: \"753a63ae-970e-4dd1-a284-bc3b6027ca64\") " pod="openstack/ovn-controller-metrics-hkwkv" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.050419 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/753a63ae-970e-4dd1-a284-bc3b6027ca64-ovn-rundir\") pod \"ovn-controller-metrics-hkwkv\" (UID: \"753a63ae-970e-4dd1-a284-bc3b6027ca64\") " pod="openstack/ovn-controller-metrics-hkwkv" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.050465 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tdlw\" (UniqueName: \"kubernetes.io/projected/753a63ae-970e-4dd1-a284-bc3b6027ca64-kube-api-access-4tdlw\") pod \"ovn-controller-metrics-hkwkv\" (UID: \"753a63ae-970e-4dd1-a284-bc3b6027ca64\") " pod="openstack/ovn-controller-metrics-hkwkv" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.050550 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/753a63ae-970e-4dd1-a284-bc3b6027ca64-config\") pod \"ovn-controller-metrics-hkwkv\" (UID: \"753a63ae-970e-4dd1-a284-bc3b6027ca64\") " pod="openstack/ovn-controller-metrics-hkwkv" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.050698 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/753a63ae-970e-4dd1-a284-bc3b6027ca64-ovs-rundir\") pod \"ovn-controller-metrics-hkwkv\" (UID: \"753a63ae-970e-4dd1-a284-bc3b6027ca64\") " pod="openstack/ovn-controller-metrics-hkwkv" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.050779 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/753a63ae-970e-4dd1-a284-bc3b6027ca64-ovn-rundir\") pod \"ovn-controller-metrics-hkwkv\" (UID: \"753a63ae-970e-4dd1-a284-bc3b6027ca64\") " pod="openstack/ovn-controller-metrics-hkwkv" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.051420 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/753a63ae-970e-4dd1-a284-bc3b6027ca64-config\") pod \"ovn-controller-metrics-hkwkv\" (UID: \"753a63ae-970e-4dd1-a284-bc3b6027ca64\") " pod="openstack/ovn-controller-metrics-hkwkv" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.056161 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753a63ae-970e-4dd1-a284-bc3b6027ca64-combined-ca-bundle\") pod \"ovn-controller-metrics-hkwkv\" (UID: \"753a63ae-970e-4dd1-a284-bc3b6027ca64\") " pod="openstack/ovn-controller-metrics-hkwkv" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.056720 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/753a63ae-970e-4dd1-a284-bc3b6027ca64-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hkwkv\" (UID: \"753a63ae-970e-4dd1-a284-bc3b6027ca64\") " 
pod="openstack/ovn-controller-metrics-hkwkv" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.071206 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tdlw\" (UniqueName: \"kubernetes.io/projected/753a63ae-970e-4dd1-a284-bc3b6027ca64-kube-api-access-4tdlw\") pod \"ovn-controller-metrics-hkwkv\" (UID: \"753a63ae-970e-4dd1-a284-bc3b6027ca64\") " pod="openstack/ovn-controller-metrics-hkwkv" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.101596 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8dm9r"] Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.103373 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-8dm9r" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.105989 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.109158 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gz9nz" event={"ID":"a75df612-e3f4-4ea3-bfc8-daceaf59205d","Type":"ContainerStarted","Data":"14cb944d9ebd6f321f4e7337871523268072f36cf383a9853996146747d08a5b"} Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.112625 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-6lvhx" event={"ID":"6064a9a4-2316-4bdd-abf1-934e9167528a","Type":"ContainerStarted","Data":"db48bb505d8316d1ad58e3c2189e606c5e7d3f9c21ce8628f76ea85096ff65d4"} Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.113886 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4","Type":"ContainerStarted","Data":"3f181baca9c046b0491dc6e8352b3a4da843071fc3c26260b6a24a6eceb14a93"} Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.115177 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4e175d27-fe10-4fb7-9ce6-cb98379357cc","Type":"ContainerStarted","Data":"320908b034f5fe19814c7c0745afd4b6011c3f3fc46fa9e30c994d6b7cc2fbf0"} Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.117456 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8dm9r"] Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.152004 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-l5h78" event={"ID":"298ac2ef-6edb-40cb-bb92-8a8e039f333b","Type":"ContainerStarted","Data":"363dd78e1c4e5867cf298bb86c64e218deba7be60b4ca20bfbf5998146e7116d"} Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.153779 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-l5h78" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.161654 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"080bfc29-50bc-4ba1-b097-4f5c54586d8c","Type":"ContainerStarted","Data":"19b4027e8736b74bf49b5b89d775dabe3e8a79fa82676e1e337d9a187eb4727b"} Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.178382 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"a686ac60-f231-4070-98c7-7acbc66c29d5","Type":"ContainerStarted","Data":"bb793056172a633150843ba507e9aeef9f6e37d73e63bc5b790e41225aa4edfb"} Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.179325 4721 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/memcached-0" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.189379 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-hkwkv" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.190636 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2505bd6b-64d4-4d17-9c1a-0e89562612be","Type":"ContainerStarted","Data":"d12641d1726ea2160227d17b476c80e4ad8727c8ef150b1b1b2de53e6dfe063d"} Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.196311 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cc071000-a602-4de6-a9bc-1c93b6d58c25","Type":"ContainerStarted","Data":"28479592c480267adca57389ec0895d73f9fc0cf7e8ea4b979c6f7d9640013a3"} Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.197166 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.220710 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-6lvhx" podStartSLOduration=16.627084138 podStartE2EDuration="27.220683109s" podCreationTimestamp="2026-02-02 13:21:42 +0000 UTC" firstStartedPulling="2026-02-02 13:21:56.893462668 +0000 UTC m=+1257.195977057" lastFinishedPulling="2026-02-02 13:22:07.487061639 +0000 UTC m=+1267.789576028" observedRunningTime="2026-02-02 13:22:09.214845761 +0000 UTC m=+1269.517360150" watchObservedRunningTime="2026-02-02 13:22:09.220683109 +0000 UTC m=+1269.523197518" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.256880 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab22a3c1-704b-4bc7-81cb-623e429e619a-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-8dm9r\" (UID: \"ab22a3c1-704b-4bc7-81cb-623e429e619a\") " pod="openstack/dnsmasq-dns-7fd796d7df-8dm9r" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.257051 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab22a3c1-704b-4bc7-81cb-623e429e619a-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-8dm9r\" (UID: \"ab22a3c1-704b-4bc7-81cb-623e429e619a\") " pod="openstack/dnsmasq-dns-7fd796d7df-8dm9r" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.257125 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6mw6\" (UniqueName: \"kubernetes.io/projected/ab22a3c1-704b-4bc7-81cb-623e429e619a-kube-api-access-c6mw6\") pod \"dnsmasq-dns-7fd796d7df-8dm9r\" (UID: \"ab22a3c1-704b-4bc7-81cb-623e429e619a\") " pod="openstack/dnsmasq-dns-7fd796d7df-8dm9r" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.257193 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab22a3c1-704b-4bc7-81cb-623e429e619a-config\") pod \"dnsmasq-dns-7fd796d7df-8dm9r\" (UID: \"ab22a3c1-704b-4bc7-81cb-623e429e619a\") " pod="openstack/dnsmasq-dns-7fd796d7df-8dm9r" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.258419 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=16.668202495 podStartE2EDuration="28.258398219s" 
podCreationTimestamp="2026-02-02 13:21:41 +0000 UTC" firstStartedPulling="2026-02-02 13:21:56.890630082 +0000 UTC m=+1257.193144471" lastFinishedPulling="2026-02-02 13:22:08.480825806 +0000 UTC m=+1268.783340195" observedRunningTime="2026-02-02 13:22:09.243192848 +0000 UTC m=+1269.545707237" watchObservedRunningTime="2026-02-02 13:22:09.258398219 +0000 UTC m=+1269.560912608" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.318859 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-l5h78" podStartSLOduration=13.253537779 podStartE2EDuration="24.318834672s" podCreationTimestamp="2026-02-02 13:21:45 +0000 UTC" firstStartedPulling="2026-02-02 13:21:56.890328934 +0000 UTC m=+1257.192843323" lastFinishedPulling="2026-02-02 13:22:07.955625827 +0000 UTC m=+1268.258140216" observedRunningTime="2026-02-02 13:22:09.266685853 +0000 UTC m=+1269.569200262" watchObservedRunningTime="2026-02-02 13:22:09.318834672 +0000 UTC m=+1269.621349061" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.363349 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab22a3c1-704b-4bc7-81cb-623e429e619a-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-8dm9r\" (UID: \"ab22a3c1-704b-4bc7-81cb-623e429e619a\") " pod="openstack/dnsmasq-dns-7fd796d7df-8dm9r" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.363580 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab22a3c1-704b-4bc7-81cb-623e429e619a-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-8dm9r\" (UID: \"ab22a3c1-704b-4bc7-81cb-623e429e619a\") " pod="openstack/dnsmasq-dns-7fd796d7df-8dm9r" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.363606 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6mw6\" (UniqueName: \"kubernetes.io/projected/ab22a3c1-704b-4bc7-81cb-623e429e619a-kube-api-access-c6mw6\") pod \"dnsmasq-dns-7fd796d7df-8dm9r\" (UID: \"ab22a3c1-704b-4bc7-81cb-623e429e619a\") " pod="openstack/dnsmasq-dns-7fd796d7df-8dm9r" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.363642 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab22a3c1-704b-4bc7-81cb-623e429e619a-config\") pod \"dnsmasq-dns-7fd796d7df-8dm9r\" (UID: \"ab22a3c1-704b-4bc7-81cb-623e429e619a\") " pod="openstack/dnsmasq-dns-7fd796d7df-8dm9r" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.364914 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab22a3c1-704b-4bc7-81cb-623e429e619a-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-8dm9r\" (UID: \"ab22a3c1-704b-4bc7-81cb-623e429e619a\") " pod="openstack/dnsmasq-dns-7fd796d7df-8dm9r" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.378263 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab22a3c1-704b-4bc7-81cb-623e429e619a-config\") pod \"dnsmasq-dns-7fd796d7df-8dm9r\" (UID: \"ab22a3c1-704b-4bc7-81cb-623e429e619a\") " pod="openstack/dnsmasq-dns-7fd796d7df-8dm9r" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.380505 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=18.182385403 podStartE2EDuration="31.380481559s" podCreationTimestamp="2026-02-02 13:21:38 +0000 
UTC" firstStartedPulling="2026-02-02 13:21:54.289070606 +0000 UTC m=+1254.591584995" lastFinishedPulling="2026-02-02 13:22:07.487166762 +0000 UTC m=+1267.789681151" observedRunningTime="2026-02-02 13:22:09.334960218 +0000 UTC m=+1269.637474617" watchObservedRunningTime="2026-02-02 13:22:09.380481559 +0000 UTC m=+1269.682995948" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.383285 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab22a3c1-704b-4bc7-81cb-623e429e619a-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-8dm9r\" (UID: \"ab22a3c1-704b-4bc7-81cb-623e429e619a\") " pod="openstack/dnsmasq-dns-7fd796d7df-8dm9r" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.427430 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6mw6\" (UniqueName: \"kubernetes.io/projected/ab22a3c1-704b-4bc7-81cb-623e429e619a-kube-api-access-c6mw6\") pod \"dnsmasq-dns-7fd796d7df-8dm9r\" (UID: \"ab22a3c1-704b-4bc7-81cb-623e429e619a\") " pod="openstack/dnsmasq-dns-7fd796d7df-8dm9r" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.462539 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-8dm9r" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.473391 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8dm9r"] Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.490360 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-x9n8f"] Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.492778 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-x9n8f" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.497023 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.512721 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-x9n8f"] Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.577285 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b496f0-c99d-43e9-9e8a-03286d8966ab-config\") pod \"dnsmasq-dns-86db49b7ff-x9n8f\" (UID: \"02b496f0-c99d-43e9-9e8a-03286d8966ab\") " pod="openstack/dnsmasq-dns-86db49b7ff-x9n8f" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.577343 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02b496f0-c99d-43e9-9e8a-03286d8966ab-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-x9n8f\" (UID: \"02b496f0-c99d-43e9-9e8a-03286d8966ab\") " pod="openstack/dnsmasq-dns-86db49b7ff-x9n8f" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.577382 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02b496f0-c99d-43e9-9e8a-03286d8966ab-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-x9n8f\" (UID: \"02b496f0-c99d-43e9-9e8a-03286d8966ab\") " pod="openstack/dnsmasq-dns-86db49b7ff-x9n8f" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.577425 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/02b496f0-c99d-43e9-9e8a-03286d8966ab-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-x9n8f\" (UID: \"02b496f0-c99d-43e9-9e8a-03286d8966ab\") " pod="openstack/dnsmasq-dns-86db49b7ff-x9n8f" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.577459 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj6v4\" (UniqueName: \"kubernetes.io/projected/02b496f0-c99d-43e9-9e8a-03286d8966ab-kube-api-access-dj6v4\") pod \"dnsmasq-dns-86db49b7ff-x9n8f\" (UID: \"02b496f0-c99d-43e9-9e8a-03286d8966ab\") " pod="openstack/dnsmasq-dns-86db49b7ff-x9n8f" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.679006 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b496f0-c99d-43e9-9e8a-03286d8966ab-config\") pod \"dnsmasq-dns-86db49b7ff-x9n8f\" (UID: \"02b496f0-c99d-43e9-9e8a-03286d8966ab\") " pod="openstack/dnsmasq-dns-86db49b7ff-x9n8f" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.679413 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02b496f0-c99d-43e9-9e8a-03286d8966ab-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-x9n8f\" (UID: \"02b496f0-c99d-43e9-9e8a-03286d8966ab\") " pod="openstack/dnsmasq-dns-86db49b7ff-x9n8f" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.679473 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02b496f0-c99d-43e9-9e8a-03286d8966ab-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-x9n8f\" (UID: \"02b496f0-c99d-43e9-9e8a-03286d8966ab\") " pod="openstack/dnsmasq-dns-86db49b7ff-x9n8f" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.679538 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02b496f0-c99d-43e9-9e8a-03286d8966ab-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-x9n8f\" (UID: \"02b496f0-c99d-43e9-9e8a-03286d8966ab\") " pod="openstack/dnsmasq-dns-86db49b7ff-x9n8f" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.679600 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj6v4\" (UniqueName: \"kubernetes.io/projected/02b496f0-c99d-43e9-9e8a-03286d8966ab-kube-api-access-dj6v4\") pod \"dnsmasq-dns-86db49b7ff-x9n8f\" (UID: \"02b496f0-c99d-43e9-9e8a-03286d8966ab\") " pod="openstack/dnsmasq-dns-86db49b7ff-x9n8f" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.680379 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b496f0-c99d-43e9-9e8a-03286d8966ab-config\") pod \"dnsmasq-dns-86db49b7ff-x9n8f\" (UID: \"02b496f0-c99d-43e9-9e8a-03286d8966ab\") " pod="openstack/dnsmasq-dns-86db49b7ff-x9n8f" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.680799 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02b496f0-c99d-43e9-9e8a-03286d8966ab-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-x9n8f\" (UID: \"02b496f0-c99d-43e9-9e8a-03286d8966ab\") " pod="openstack/dnsmasq-dns-86db49b7ff-x9n8f" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.681221 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02b496f0-c99d-43e9-9e8a-03286d8966ab-ovsdbserver-nb\") 
pod \"dnsmasq-dns-86db49b7ff-x9n8f\" (UID: \"02b496f0-c99d-43e9-9e8a-03286d8966ab\") " pod="openstack/dnsmasq-dns-86db49b7ff-x9n8f" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.682845 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02b496f0-c99d-43e9-9e8a-03286d8966ab-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-x9n8f\" (UID: \"02b496f0-c99d-43e9-9e8a-03286d8966ab\") " pod="openstack/dnsmasq-dns-86db49b7ff-x9n8f" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.720770 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj6v4\" (UniqueName: \"kubernetes.io/projected/02b496f0-c99d-43e9-9e8a-03286d8966ab-kube-api-access-dj6v4\") pod \"dnsmasq-dns-86db49b7ff-x9n8f\" (UID: \"02b496f0-c99d-43e9-9e8a-03286d8966ab\") " pod="openstack/dnsmasq-dns-86db49b7ff-x9n8f" Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.852675 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-hkwkv"] Feb 02 13:22:09 crc kubenswrapper[4721]: I0202 13:22:09.915831 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-x9n8f" Feb 02 13:22:10 crc kubenswrapper[4721]: I0202 13:22:10.121805 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8dm9r"] Feb 02 13:22:10 crc kubenswrapper[4721]: I0202 13:22:10.247710 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-hkwkv" event={"ID":"753a63ae-970e-4dd1-a284-bc3b6027ca64","Type":"ContainerStarted","Data":"bfdc667102bd1d472804c474addc8c1f40e6e365d9c067acd507ccd135592a72"} Feb 02 13:22:10 crc kubenswrapper[4721]: I0202 13:22:10.250861 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-8dm9r" event={"ID":"ab22a3c1-704b-4bc7-81cb-623e429e619a","Type":"ContainerStarted","Data":"78c99d5eb08266bee26c3f183a5913dabc7aa4cac16d4baa27beddfa99913262"} Feb 02 13:22:10 crc kubenswrapper[4721]: I0202 13:22:10.257186 4721 generic.go:334] "Generic (PLEG): container finished" podID="a75df612-e3f4-4ea3-bfc8-daceaf59205d" containerID="14cb944d9ebd6f321f4e7337871523268072f36cf383a9853996146747d08a5b" exitCode=0 Feb 02 13:22:10 crc kubenswrapper[4721]: I0202 13:22:10.257480 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gz9nz" event={"ID":"a75df612-e3f4-4ea3-bfc8-daceaf59205d","Type":"ContainerDied","Data":"14cb944d9ebd6f321f4e7337871523268072f36cf383a9853996146747d08a5b"} Feb 02 13:22:10 crc kubenswrapper[4721]: I0202 13:22:10.548624 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-x9n8f"] Feb 02 13:22:10 crc kubenswrapper[4721]: W0202 13:22:10.570991 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02b496f0_c99d_43e9_9e8a_03286d8966ab.slice/crio-d756b18dd24d6e38b37f9d5682fc7973ac79057046e2526e8afd7ae562cccd1e WatchSource:0}: Error finding container d756b18dd24d6e38b37f9d5682fc7973ac79057046e2526e8afd7ae562cccd1e: Status 404 returned error can't find the container with id d756b18dd24d6e38b37f9d5682fc7973ac79057046e2526e8afd7ae562cccd1e Feb 02 13:22:11 crc kubenswrapper[4721]: I0202 13:22:11.270164 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-x9n8f" 
event={"ID":"02b496f0-c99d-43e9-9e8a-03286d8966ab","Type":"ContainerStarted","Data":"d756b18dd24d6e38b37f9d5682fc7973ac79057046e2526e8afd7ae562cccd1e"} Feb 02 13:22:11 crc kubenswrapper[4721]: I0202 13:22:11.272919 4721 generic.go:334] "Generic (PLEG): container finished" podID="ab22a3c1-704b-4bc7-81cb-623e429e619a" containerID="7462ae5c027f73fc404a31a58cb2b894d96a3e39c3ff01b6eaafb7a0a288d5dc" exitCode=0 Feb 02 13:22:11 crc kubenswrapper[4721]: I0202 13:22:11.273015 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-8dm9r" event={"ID":"ab22a3c1-704b-4bc7-81cb-623e429e619a","Type":"ContainerDied","Data":"7462ae5c027f73fc404a31a58cb2b894d96a3e39c3ff01b6eaafb7a0a288d5dc"} Feb 02 13:22:11 crc kubenswrapper[4721]: I0202 13:22:11.277306 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gz9nz" event={"ID":"a75df612-e3f4-4ea3-bfc8-daceaf59205d","Type":"ContainerStarted","Data":"c668936653f99869b2a3b3bceecad194fa33cfecf37308e54182c33305d0440c"} Feb 02 13:22:11 crc kubenswrapper[4721]: I0202 13:22:11.689891 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-8dm9r" Feb 02 13:22:11 crc kubenswrapper[4721]: I0202 13:22:11.859646 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab22a3c1-704b-4bc7-81cb-623e429e619a-config\") pod \"ab22a3c1-704b-4bc7-81cb-623e429e619a\" (UID: \"ab22a3c1-704b-4bc7-81cb-623e429e619a\") " Feb 02 13:22:11 crc kubenswrapper[4721]: I0202 13:22:11.859794 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab22a3c1-704b-4bc7-81cb-623e429e619a-dns-svc\") pod \"ab22a3c1-704b-4bc7-81cb-623e429e619a\" (UID: \"ab22a3c1-704b-4bc7-81cb-623e429e619a\") " Feb 02 13:22:11 crc kubenswrapper[4721]: I0202 13:22:11.859876 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6mw6\" (UniqueName: \"kubernetes.io/projected/ab22a3c1-704b-4bc7-81cb-623e429e619a-kube-api-access-c6mw6\") pod \"ab22a3c1-704b-4bc7-81cb-623e429e619a\" (UID: \"ab22a3c1-704b-4bc7-81cb-623e429e619a\") " Feb 02 13:22:11 crc kubenswrapper[4721]: I0202 13:22:11.859907 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab22a3c1-704b-4bc7-81cb-623e429e619a-ovsdbserver-nb\") pod \"ab22a3c1-704b-4bc7-81cb-623e429e619a\" (UID: \"ab22a3c1-704b-4bc7-81cb-623e429e619a\") " Feb 02 13:22:11 crc kubenswrapper[4721]: I0202 13:22:11.863934 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab22a3c1-704b-4bc7-81cb-623e429e619a-kube-api-access-c6mw6" (OuterVolumeSpecName: "kube-api-access-c6mw6") pod "ab22a3c1-704b-4bc7-81cb-623e429e619a" (UID: "ab22a3c1-704b-4bc7-81cb-623e429e619a"). InnerVolumeSpecName "kube-api-access-c6mw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:22:11 crc kubenswrapper[4721]: I0202 13:22:11.881244 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab22a3c1-704b-4bc7-81cb-623e429e619a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ab22a3c1-704b-4bc7-81cb-623e429e619a" (UID: "ab22a3c1-704b-4bc7-81cb-623e429e619a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:11 crc kubenswrapper[4721]: I0202 13:22:11.884997 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab22a3c1-704b-4bc7-81cb-623e429e619a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ab22a3c1-704b-4bc7-81cb-623e429e619a" (UID: "ab22a3c1-704b-4bc7-81cb-623e429e619a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:11 crc kubenswrapper[4721]: I0202 13:22:11.886057 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab22a3c1-704b-4bc7-81cb-623e429e619a-config" (OuterVolumeSpecName: "config") pod "ab22a3c1-704b-4bc7-81cb-623e429e619a" (UID: "ab22a3c1-704b-4bc7-81cb-623e429e619a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:11 crc kubenswrapper[4721]: I0202 13:22:11.962801 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab22a3c1-704b-4bc7-81cb-623e429e619a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:11 crc kubenswrapper[4721]: I0202 13:22:11.962857 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6mw6\" (UniqueName: \"kubernetes.io/projected/ab22a3c1-704b-4bc7-81cb-623e429e619a-kube-api-access-c6mw6\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:11 crc kubenswrapper[4721]: I0202 13:22:11.962874 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab22a3c1-704b-4bc7-81cb-623e429e619a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:11 crc kubenswrapper[4721]: I0202 13:22:11.962887 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab22a3c1-704b-4bc7-81cb-623e429e619a-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:12 crc kubenswrapper[4721]: I0202 13:22:12.288136 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4e175d27-fe10-4fb7-9ce6-cb98379357cc","Type":"ContainerStarted","Data":"9bc96cfe0b5d7259d1047b30aa115867fe936695dc7f12460c53b3e4556e2705"} Feb 02 13:22:12 crc kubenswrapper[4721]: I0202 13:22:12.290561 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-8dm9r" Feb 02 13:22:12 crc kubenswrapper[4721]: I0202 13:22:12.290540 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-8dm9r" event={"ID":"ab22a3c1-704b-4bc7-81cb-623e429e619a","Type":"ContainerDied","Data":"78c99d5eb08266bee26c3f183a5913dabc7aa4cac16d4baa27beddfa99913262"} Feb 02 13:22:12 crc kubenswrapper[4721]: I0202 13:22:12.290699 4721 scope.go:117] "RemoveContainer" containerID="7462ae5c027f73fc404a31a58cb2b894d96a3e39c3ff01b6eaafb7a0a288d5dc" Feb 02 13:22:12 crc kubenswrapper[4721]: I0202 13:22:12.293604 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gz9nz" event={"ID":"a75df612-e3f4-4ea3-bfc8-daceaf59205d","Type":"ContainerStarted","Data":"a880a4453b088ae93b60e019547efc4b43fba24594a2f483ba14d172b0f87fb8"} Feb 02 13:22:12 crc kubenswrapper[4721]: I0202 13:22:12.293742 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-gz9nz" Feb 02 13:22:12 crc kubenswrapper[4721]: I0202 13:22:12.295829 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"080bfc29-50bc-4ba1-b097-4f5c54586d8c","Type":"ContainerStarted","Data":"788238e13b3a7d4c500208e838cbd379811c90ee8f0d3f7659036aaa10ee1e55"} Feb 02 13:22:12 crc kubenswrapper[4721]: I0202 13:22:12.297266 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-hkwkv" event={"ID":"753a63ae-970e-4dd1-a284-bc3b6027ca64","Type":"ContainerStarted","Data":"35547630b2e0f8b8d2c1eeefe1581ef2ec7f24a6564a1f148ec46816fc888253"} Feb 02 13:22:12 crc kubenswrapper[4721]: I0202 13:22:12.299320 4721 generic.go:334] "Generic (PLEG): container finished" podID="02b496f0-c99d-43e9-9e8a-03286d8966ab" containerID="d0477d4beedf8835ceccc8981c1de2a9fe8aa3519682eb80e7972c0762297343" exitCode=0 Feb 02 13:22:12 crc kubenswrapper[4721]: I0202 13:22:12.299378 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-x9n8f" event={"ID":"02b496f0-c99d-43e9-9e8a-03286d8966ab","Type":"ContainerDied","Data":"d0477d4beedf8835ceccc8981c1de2a9fe8aa3519682eb80e7972c0762297343"} Feb 02 13:22:12 crc kubenswrapper[4721]: I0202 13:22:12.301857 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"754114a2-a012-43fe-923b-a8cc3df91aa0","Type":"ContainerStarted","Data":"f51717e20215c350c32f4c0b374e223fc4a90aedd712f943c01301452ac10dc6"} Feb 02 13:22:12 crc kubenswrapper[4721]: I0202 13:22:12.316823 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=10.750053739 podStartE2EDuration="25.316801206s" podCreationTimestamp="2026-02-02 13:21:47 +0000 UTC" firstStartedPulling="2026-02-02 13:21:56.890835498 +0000 UTC m=+1257.193349887" lastFinishedPulling="2026-02-02 13:22:11.457582965 +0000 UTC m=+1271.760097354" observedRunningTime="2026-02-02 13:22:12.311929944 +0000 UTC m=+1272.614444333" watchObservedRunningTime="2026-02-02 13:22:12.316801206 +0000 UTC m=+1272.619315595" Feb 02 13:22:12 crc kubenswrapper[4721]: I0202 13:22:12.343989 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-hkwkv" podStartSLOduration=2.74350902 podStartE2EDuration="4.34396725s" podCreationTimestamp="2026-02-02 13:22:08 +0000 UTC" firstStartedPulling="2026-02-02 13:22:09.885233715 +0000 UTC m=+1270.187748104" 
lastFinishedPulling="2026-02-02 13:22:11.485691935 +0000 UTC m=+1271.788206334" observedRunningTime="2026-02-02 13:22:12.331566145 +0000 UTC m=+1272.634080544" watchObservedRunningTime="2026-02-02 13:22:12.34396725 +0000 UTC m=+1272.646481649" Feb 02 13:22:12 crc kubenswrapper[4721]: I0202 13:22:12.357673 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-gz9nz" podStartSLOduration=16.608507734 podStartE2EDuration="27.3576538s" podCreationTimestamp="2026-02-02 13:21:45 +0000 UTC" firstStartedPulling="2026-02-02 13:21:56.894147197 +0000 UTC m=+1257.196661586" lastFinishedPulling="2026-02-02 13:22:07.643293263 +0000 UTC m=+1267.945807652" observedRunningTime="2026-02-02 13:22:12.355757539 +0000 UTC m=+1272.658271938" watchObservedRunningTime="2026-02-02 13:22:12.3576538 +0000 UTC m=+1272.660168189" Feb 02 13:22:12 crc kubenswrapper[4721]: I0202 13:22:12.475549 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=15.908973605 podStartE2EDuration="30.475518526s" podCreationTimestamp="2026-02-02 13:21:42 +0000 UTC" firstStartedPulling="2026-02-02 13:21:56.891036074 +0000 UTC m=+1257.193550463" lastFinishedPulling="2026-02-02 13:22:11.457580995 +0000 UTC m=+1271.760095384" observedRunningTime="2026-02-02 13:22:12.421846325 +0000 UTC m=+1272.724360744" watchObservedRunningTime="2026-02-02 13:22:12.475518526 +0000 UTC m=+1272.778032915" Feb 02 13:22:12 crc kubenswrapper[4721]: I0202 13:22:12.518158 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8dm9r"] Feb 02 13:22:12 crc kubenswrapper[4721]: I0202 13:22:12.529199 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8dm9r"] Feb 02 13:22:12 crc kubenswrapper[4721]: I0202 13:22:12.830943 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 02 13:22:12 crc kubenswrapper[4721]: I0202 13:22:12.878400 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 02 13:22:13 crc kubenswrapper[4721]: I0202 13:22:13.314409 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-x9n8f" event={"ID":"02b496f0-c99d-43e9-9e8a-03286d8966ab","Type":"ContainerStarted","Data":"6404bb6f1951c91af9f8453984cfaf480f6307bbda26d7f2b85d0b4cee4e2109"} Feb 02 13:22:13 crc kubenswrapper[4721]: I0202 13:22:13.316103 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-gz9nz" Feb 02 13:22:13 crc kubenswrapper[4721]: I0202 13:22:13.316149 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 02 13:22:13 crc kubenswrapper[4721]: I0202 13:22:13.343469 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-x9n8f" podStartSLOduration=4.343446352 podStartE2EDuration="4.343446352s" podCreationTimestamp="2026-02-02 13:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:22:13.337486281 +0000 UTC m=+1273.640000680" watchObservedRunningTime="2026-02-02 13:22:13.343446352 +0000 UTC m=+1273.645960751" Feb 02 13:22:13 crc kubenswrapper[4721]: I0202 13:22:13.369083 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 02 
13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.101775 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.102207 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.141634 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.257789 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.324111 4721 generic.go:334] "Generic (PLEG): container finished" podID="2505bd6b-64d4-4d17-9c1a-0e89562612be" containerID="d12641d1726ea2160227d17b476c80e4ad8727c8ef150b1b1b2de53e6dfe063d" exitCode=0 Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.324168 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2505bd6b-64d4-4d17-9c1a-0e89562612be","Type":"ContainerDied","Data":"d12641d1726ea2160227d17b476c80e4ad8727c8ef150b1b1b2de53e6dfe063d"} Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.326034 4721 generic.go:334] "Generic (PLEG): container finished" podID="f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4" containerID="3f181baca9c046b0491dc6e8352b3a4da843071fc3c26260b6a24a6eceb14a93" exitCode=0 Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.326114 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4","Type":"ContainerDied","Data":"3f181baca9c046b0491dc6e8352b3a4da843071fc3c26260b6a24a6eceb14a93"} Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.326631 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-x9n8f" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.390214 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.454537 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab22a3c1-704b-4bc7-81cb-623e429e619a" path="/var/lib/kubelet/pods/ab22a3c1-704b-4bc7-81cb-623e429e619a/volumes" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.631763 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 02 13:22:14 crc kubenswrapper[4721]: E0202 13:22:14.632358 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab22a3c1-704b-4bc7-81cb-623e429e619a" containerName="init" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.632380 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab22a3c1-704b-4bc7-81cb-623e429e619a" containerName="init" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.632648 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab22a3c1-704b-4bc7-81cb-623e429e619a" containerName="init" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.634204 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.644498 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.645146 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.645292 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.645449 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-96xv8" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.682445 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.748802 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd5938c1-e4b9-4437-a379-c25bc5b1c243-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"cd5938c1-e4b9-4437-a379-c25bc5b1c243\") " pod="openstack/ovn-northd-0" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.749237 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd5938c1-e4b9-4437-a379-c25bc5b1c243-scripts\") pod \"ovn-northd-0\" (UID: \"cd5938c1-e4b9-4437-a379-c25bc5b1c243\") " pod="openstack/ovn-northd-0" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.749270 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkkqr\" (UniqueName: \"kubernetes.io/projected/cd5938c1-e4b9-4437-a379-c25bc5b1c243-kube-api-access-fkkqr\") pod \"ovn-northd-0\" (UID: \"cd5938c1-e4b9-4437-a379-c25bc5b1c243\") " pod="openstack/ovn-northd-0" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.749483 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd5938c1-e4b9-4437-a379-c25bc5b1c243-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"cd5938c1-e4b9-4437-a379-c25bc5b1c243\") " pod="openstack/ovn-northd-0" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.749640 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cd5938c1-e4b9-4437-a379-c25bc5b1c243-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"cd5938c1-e4b9-4437-a379-c25bc5b1c243\") " pod="openstack/ovn-northd-0" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.749913 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd5938c1-e4b9-4437-a379-c25bc5b1c243-config\") pod \"ovn-northd-0\" (UID: \"cd5938c1-e4b9-4437-a379-c25bc5b1c243\") " pod="openstack/ovn-northd-0" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.749958 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5938c1-e4b9-4437-a379-c25bc5b1c243-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"cd5938c1-e4b9-4437-a379-c25bc5b1c243\") " pod="openstack/ovn-northd-0" Feb 02 13:22:14 crc kubenswrapper[4721]: 
I0202 13:22:14.763649 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.763721 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.763772 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.764649 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"56e02e958f304734b98b90c0b35547a7aaeb3ba27ad6cd35ef754f549abd2513"} pod="openshift-machine-config-operator/machine-config-daemon-rppjz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.764721 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" containerID="cri-o://56e02e958f304734b98b90c0b35547a7aaeb3ba27ad6cd35ef754f549abd2513" gracePeriod=600 Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.852431 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd5938c1-e4b9-4437-a379-c25bc5b1c243-config\") pod \"ovn-northd-0\" (UID: \"cd5938c1-e4b9-4437-a379-c25bc5b1c243\") " pod="openstack/ovn-northd-0" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.852478 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5938c1-e4b9-4437-a379-c25bc5b1c243-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"cd5938c1-e4b9-4437-a379-c25bc5b1c243\") " pod="openstack/ovn-northd-0" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.852528 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd5938c1-e4b9-4437-a379-c25bc5b1c243-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"cd5938c1-e4b9-4437-a379-c25bc5b1c243\") " pod="openstack/ovn-northd-0" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.852565 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd5938c1-e4b9-4437-a379-c25bc5b1c243-scripts\") pod \"ovn-northd-0\" (UID: \"cd5938c1-e4b9-4437-a379-c25bc5b1c243\") " pod="openstack/ovn-northd-0" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.852586 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkkqr\" (UniqueName: \"kubernetes.io/projected/cd5938c1-e4b9-4437-a379-c25bc5b1c243-kube-api-access-fkkqr\") pod \"ovn-northd-0\" (UID: \"cd5938c1-e4b9-4437-a379-c25bc5b1c243\") " pod="openstack/ovn-northd-0" Feb 02 13:22:14 crc 
kubenswrapper[4721]: I0202 13:22:14.852655 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd5938c1-e4b9-4437-a379-c25bc5b1c243-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"cd5938c1-e4b9-4437-a379-c25bc5b1c243\") " pod="openstack/ovn-northd-0" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.852705 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cd5938c1-e4b9-4437-a379-c25bc5b1c243-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"cd5938c1-e4b9-4437-a379-c25bc5b1c243\") " pod="openstack/ovn-northd-0" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.853260 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cd5938c1-e4b9-4437-a379-c25bc5b1c243-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"cd5938c1-e4b9-4437-a379-c25bc5b1c243\") " pod="openstack/ovn-northd-0" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.853689 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd5938c1-e4b9-4437-a379-c25bc5b1c243-config\") pod \"ovn-northd-0\" (UID: \"cd5938c1-e4b9-4437-a379-c25bc5b1c243\") " pod="openstack/ovn-northd-0" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.853771 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd5938c1-e4b9-4437-a379-c25bc5b1c243-scripts\") pod \"ovn-northd-0\" (UID: \"cd5938c1-e4b9-4437-a379-c25bc5b1c243\") " pod="openstack/ovn-northd-0" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.857147 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd5938c1-e4b9-4437-a379-c25bc5b1c243-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"cd5938c1-e4b9-4437-a379-c25bc5b1c243\") " pod="openstack/ovn-northd-0" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.857227 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd5938c1-e4b9-4437-a379-c25bc5b1c243-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"cd5938c1-e4b9-4437-a379-c25bc5b1c243\") " pod="openstack/ovn-northd-0" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.863805 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5938c1-e4b9-4437-a379-c25bc5b1c243-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"cd5938c1-e4b9-4437-a379-c25bc5b1c243\") " pod="openstack/ovn-northd-0" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.881901 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkkqr\" (UniqueName: \"kubernetes.io/projected/cd5938c1-e4b9-4437-a379-c25bc5b1c243-kube-api-access-fkkqr\") pod \"ovn-northd-0\" (UID: \"cd5938c1-e4b9-4437-a379-c25bc5b1c243\") " pod="openstack/ovn-northd-0" Feb 02 13:22:14 crc kubenswrapper[4721]: I0202 13:22:14.978741 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 02 13:22:15 crc kubenswrapper[4721]: I0202 13:22:15.342817 4721 generic.go:334] "Generic (PLEG): container finished" podID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerID="56e02e958f304734b98b90c0b35547a7aaeb3ba27ad6cd35ef754f549abd2513" exitCode=0 Feb 02 13:22:15 crc kubenswrapper[4721]: I0202 13:22:15.342864 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerDied","Data":"56e02e958f304734b98b90c0b35547a7aaeb3ba27ad6cd35ef754f549abd2513"} Feb 02 13:22:15 crc kubenswrapper[4721]: I0202 13:22:15.342904 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerStarted","Data":"4c89d3977af7fbb3779c0661dadea0111ac2d8f3c3974c534b682ad6a4af4aac"} Feb 02 13:22:15 crc kubenswrapper[4721]: I0202 13:22:15.342939 4721 scope.go:117] "RemoveContainer" containerID="3e250ca33160c82bc83b5c1d01cc482ebd55cdb3c1b9ae291d6af786cb617e66" Feb 02 13:22:15 crc kubenswrapper[4721]: I0202 13:22:15.346062 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2505bd6b-64d4-4d17-9c1a-0e89562612be","Type":"ContainerStarted","Data":"830088dbd5ee4718481c9e7cbef3e103bc42082b48115d388f8f6e997fb15bf0"} Feb 02 13:22:15 crc kubenswrapper[4721]: I0202 13:22:15.360367 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4","Type":"ContainerStarted","Data":"25c0fe8095f3ecd0c419a580ee1f9340592817f3c826d9555d60ff4e92044995"} Feb 02 13:22:15 crc kubenswrapper[4721]: I0202 13:22:15.404795 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=28.416191224 podStartE2EDuration="39.404776862s" podCreationTimestamp="2026-02-02 13:21:36 +0000 UTC" firstStartedPulling="2026-02-02 13:21:56.891148097 +0000 UTC m=+1257.193662496" lastFinishedPulling="2026-02-02 13:22:07.879733755 +0000 UTC m=+1268.182248134" observedRunningTime="2026-02-02 13:22:15.399922131 +0000 UTC m=+1275.702436520" watchObservedRunningTime="2026-02-02 13:22:15.404776862 +0000 UTC m=+1275.707291251" Feb 02 13:22:15 crc kubenswrapper[4721]: I0202 13:22:15.431119 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=24.948187148 podStartE2EDuration="38.431098954s" podCreationTimestamp="2026-02-02 13:21:37 +0000 UTC" firstStartedPulling="2026-02-02 13:21:54.255263182 +0000 UTC m=+1254.557777571" lastFinishedPulling="2026-02-02 13:22:07.738174988 +0000 UTC m=+1268.040689377" observedRunningTime="2026-02-02 13:22:15.41947489 +0000 UTC m=+1275.721989279" watchObservedRunningTime="2026-02-02 13:22:15.431098954 +0000 UTC m=+1275.733613353" Feb 02 13:22:15 crc kubenswrapper[4721]: I0202 13:22:15.521910 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 02 13:22:16 crc kubenswrapper[4721]: I0202 13:22:16.386590 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"cd5938c1-e4b9-4437-a379-c25bc5b1c243","Type":"ContainerStarted","Data":"90892aafab2664cc738132e88bd477c123af707400f227ca6b329a637aeecfae"} Feb 02 13:22:17 crc kubenswrapper[4721]: I0202 13:22:17.398005 4721 generic.go:334] 
"Generic (PLEG): container finished" podID="754114a2-a012-43fe-923b-a8cc3df91aa0" containerID="f51717e20215c350c32f4c0b374e223fc4a90aedd712f943c01301452ac10dc6" exitCode=0 Feb 02 13:22:17 crc kubenswrapper[4721]: I0202 13:22:17.398316 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"754114a2-a012-43fe-923b-a8cc3df91aa0","Type":"ContainerDied","Data":"f51717e20215c350c32f4c0b374e223fc4a90aedd712f943c01301452ac10dc6"} Feb 02 13:22:17 crc kubenswrapper[4721]: I0202 13:22:17.401974 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"cd5938c1-e4b9-4437-a379-c25bc5b1c243","Type":"ContainerStarted","Data":"87693f073ac526e5ae970991a9a995206cb8ae50f02087847db9ea2ee617f6e9"} Feb 02 13:22:17 crc kubenswrapper[4721]: I0202 13:22:17.402014 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"cd5938c1-e4b9-4437-a379-c25bc5b1c243","Type":"ContainerStarted","Data":"6c33e1641498bc27664c341a6ed9c261e33636435d7ce39409d4a5e987fc1e46"} Feb 02 13:22:17 crc kubenswrapper[4721]: I0202 13:22:17.402124 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 02 13:22:17 crc kubenswrapper[4721]: I0202 13:22:17.470811 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.396409971 podStartE2EDuration="3.470790029s" podCreationTimestamp="2026-02-02 13:22:14 +0000 UTC" firstStartedPulling="2026-02-02 13:22:15.512609707 +0000 UTC m=+1275.815124096" lastFinishedPulling="2026-02-02 13:22:16.586989765 +0000 UTC m=+1276.889504154" observedRunningTime="2026-02-02 13:22:17.470240955 +0000 UTC m=+1277.772755354" watchObservedRunningTime="2026-02-02 13:22:17.470790029 +0000 UTC m=+1277.773304418" Feb 02 13:22:17 crc kubenswrapper[4721]: I0202 13:22:17.604558 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 02 13:22:17 crc kubenswrapper[4721]: I0202 13:22:17.604624 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 02 13:22:19 crc kubenswrapper[4721]: I0202 13:22:19.363051 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 02 13:22:19 crc kubenswrapper[4721]: I0202 13:22:19.363498 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 02 13:22:19 crc kubenswrapper[4721]: I0202 13:22:19.917285 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-x9n8f" Feb 02 13:22:19 crc kubenswrapper[4721]: I0202 13:22:19.920949 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 02 13:22:19 crc kubenswrapper[4721]: I0202 13:22:19.988854 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ljjc5"] Feb 02 13:22:19 crc kubenswrapper[4721]: I0202 13:22:19.989155 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-ljjc5" podUID="f9598cea-e831-47fd-aa1a-08060e23bba2" containerName="dnsmasq-dns" containerID="cri-o://79ee8e8028b6aa26c38e11ef05e770e5dd8bbf55490a89dcd0603734b8ebe97e" gracePeriod=10 Feb 02 13:22:20 crc kubenswrapper[4721]: I0202 13:22:20.054391 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/openstack-galera-0" Feb 02 13:22:20 crc kubenswrapper[4721]: I0202 13:22:20.148367 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-ljjc5" podUID="f9598cea-e831-47fd-aa1a-08060e23bba2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: connect: connection refused" Feb 02 13:22:20 crc kubenswrapper[4721]: I0202 13:22:20.456124 4721 generic.go:334] "Generic (PLEG): container finished" podID="f9598cea-e831-47fd-aa1a-08060e23bba2" containerID="79ee8e8028b6aa26c38e11ef05e770e5dd8bbf55490a89dcd0603734b8ebe97e" exitCode=0 Feb 02 13:22:20 crc kubenswrapper[4721]: I0202 13:22:20.456210 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-ljjc5" event={"ID":"f9598cea-e831-47fd-aa1a-08060e23bba2","Type":"ContainerDied","Data":"79ee8e8028b6aa26c38e11ef05e770e5dd8bbf55490a89dcd0603734b8ebe97e"} Feb 02 13:22:20 crc kubenswrapper[4721]: I0202 13:22:20.589095 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ljjc5" Feb 02 13:22:20 crc kubenswrapper[4721]: I0202 13:22:20.681013 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9598cea-e831-47fd-aa1a-08060e23bba2-config\") pod \"f9598cea-e831-47fd-aa1a-08060e23bba2\" (UID: \"f9598cea-e831-47fd-aa1a-08060e23bba2\") " Feb 02 13:22:20 crc kubenswrapper[4721]: I0202 13:22:20.681247 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9598cea-e831-47fd-aa1a-08060e23bba2-dns-svc\") pod \"f9598cea-e831-47fd-aa1a-08060e23bba2\" (UID: \"f9598cea-e831-47fd-aa1a-08060e23bba2\") " Feb 02 13:22:20 crc kubenswrapper[4721]: I0202 13:22:20.681283 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpqwx\" (UniqueName: \"kubernetes.io/projected/f9598cea-e831-47fd-aa1a-08060e23bba2-kube-api-access-rpqwx\") pod \"f9598cea-e831-47fd-aa1a-08060e23bba2\" (UID: \"f9598cea-e831-47fd-aa1a-08060e23bba2\") " Feb 02 13:22:20 crc kubenswrapper[4721]: I0202 13:22:20.688307 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9598cea-e831-47fd-aa1a-08060e23bba2-kube-api-access-rpqwx" (OuterVolumeSpecName: "kube-api-access-rpqwx") pod "f9598cea-e831-47fd-aa1a-08060e23bba2" (UID: "f9598cea-e831-47fd-aa1a-08060e23bba2"). InnerVolumeSpecName "kube-api-access-rpqwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:22:20 crc kubenswrapper[4721]: I0202 13:22:20.732036 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9598cea-e831-47fd-aa1a-08060e23bba2-config" (OuterVolumeSpecName: "config") pod "f9598cea-e831-47fd-aa1a-08060e23bba2" (UID: "f9598cea-e831-47fd-aa1a-08060e23bba2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:20 crc kubenswrapper[4721]: I0202 13:22:20.738556 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9598cea-e831-47fd-aa1a-08060e23bba2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f9598cea-e831-47fd-aa1a-08060e23bba2" (UID: "f9598cea-e831-47fd-aa1a-08060e23bba2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:20 crc kubenswrapper[4721]: I0202 13:22:20.784028 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9598cea-e831-47fd-aa1a-08060e23bba2-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:20 crc kubenswrapper[4721]: I0202 13:22:20.784086 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9598cea-e831-47fd-aa1a-08060e23bba2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:20 crc kubenswrapper[4721]: I0202 13:22:20.784100 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpqwx\" (UniqueName: \"kubernetes.io/projected/f9598cea-e831-47fd-aa1a-08060e23bba2-kube-api-access-rpqwx\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.383642 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-4d7hn"] Feb 02 13:22:21 crc kubenswrapper[4721]: E0202 13:22:21.384161 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9598cea-e831-47fd-aa1a-08060e23bba2" containerName="dnsmasq-dns" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.384184 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9598cea-e831-47fd-aa1a-08060e23bba2" containerName="dnsmasq-dns" Feb 02 13:22:21 crc kubenswrapper[4721]: E0202 13:22:21.384228 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9598cea-e831-47fd-aa1a-08060e23bba2" containerName="init" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.384235 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9598cea-e831-47fd-aa1a-08060e23bba2" containerName="init" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.384483 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9598cea-e831-47fd-aa1a-08060e23bba2" containerName="dnsmasq-dns" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.385415 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-4d7hn" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.418554 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-4d7hn"] Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.465445 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-04a7-account-create-update-xhlq8"] Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.467007 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-04a7-account-create-update-xhlq8" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.470506 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.482747 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-ljjc5" event={"ID":"f9598cea-e831-47fd-aa1a-08060e23bba2","Type":"ContainerDied","Data":"dcd5d79dba084b60aa8e85cba431f52f20415983b8d07817faa29db86eef35ea"} Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.482807 4721 scope.go:117] "RemoveContainer" containerID="79ee8e8028b6aa26c38e11ef05e770e5dd8bbf55490a89dcd0603734b8ebe97e" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.482987 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ljjc5" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.482999 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-04a7-account-create-update-xhlq8"] Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.496879 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s645b\" (UniqueName: \"kubernetes.io/projected/b5e72b39-6085-4753-8b7d-a93a80c95d49-kube-api-access-s645b\") pod \"mysqld-exporter-openstack-db-create-4d7hn\" (UID: \"b5e72b39-6085-4753-8b7d-a93a80c95d49\") " pod="openstack/mysqld-exporter-openstack-db-create-4d7hn" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.497218 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5e72b39-6085-4753-8b7d-a93a80c95d49-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-4d7hn\" (UID: \"b5e72b39-6085-4753-8b7d-a93a80c95d49\") " pod="openstack/mysqld-exporter-openstack-db-create-4d7hn" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.515516 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.550433 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-x4f5m"] Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.553677 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-x4f5m" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.576929 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-x4f5m"] Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.603476 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b74e699-bc4f-4415-a9dc-8ad52d916bc0-operator-scripts\") pod \"mysqld-exporter-04a7-account-create-update-xhlq8\" (UID: \"8b74e699-bc4f-4415-a9dc-8ad52d916bc0\") " pod="openstack/mysqld-exporter-04a7-account-create-update-xhlq8" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.603556 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wknbf\" (UniqueName: \"kubernetes.io/projected/8b74e699-bc4f-4415-a9dc-8ad52d916bc0-kube-api-access-wknbf\") pod \"mysqld-exporter-04a7-account-create-update-xhlq8\" (UID: \"8b74e699-bc4f-4415-a9dc-8ad52d916bc0\") " pod="openstack/mysqld-exporter-04a7-account-create-update-xhlq8" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.603606 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5e72b39-6085-4753-8b7d-a93a80c95d49-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-4d7hn\" (UID: \"b5e72b39-6085-4753-8b7d-a93a80c95d49\") " pod="openstack/mysqld-exporter-openstack-db-create-4d7hn" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.603724 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s645b\" (UniqueName: \"kubernetes.io/projected/b5e72b39-6085-4753-8b7d-a93a80c95d49-kube-api-access-s645b\") pod \"mysqld-exporter-openstack-db-create-4d7hn\" (UID: \"b5e72b39-6085-4753-8b7d-a93a80c95d49\") " 
pod="openstack/mysqld-exporter-openstack-db-create-4d7hn" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.604977 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5e72b39-6085-4753-8b7d-a93a80c95d49-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-4d7hn\" (UID: \"b5e72b39-6085-4753-8b7d-a93a80c95d49\") " pod="openstack/mysqld-exporter-openstack-db-create-4d7hn" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.610057 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ljjc5"] Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.634651 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s645b\" (UniqueName: \"kubernetes.io/projected/b5e72b39-6085-4753-8b7d-a93a80c95d49-kube-api-access-s645b\") pod \"mysqld-exporter-openstack-db-create-4d7hn\" (UID: \"b5e72b39-6085-4753-8b7d-a93a80c95d49\") " pod="openstack/mysqld-exporter-openstack-db-create-4d7hn" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.641265 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ljjc5"] Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.705565 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2hmg\" (UniqueName: \"kubernetes.io/projected/a12cebe8-c719-4841-8d01-e9faf9b745cf-kube-api-access-h2hmg\") pod \"dnsmasq-dns-698758b865-x4f5m\" (UID: \"a12cebe8-c719-4841-8d01-e9faf9b745cf\") " pod="openstack/dnsmasq-dns-698758b865-x4f5m" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.705629 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a12cebe8-c719-4841-8d01-e9faf9b745cf-dns-svc\") pod \"dnsmasq-dns-698758b865-x4f5m\" (UID: \"a12cebe8-c719-4841-8d01-e9faf9b745cf\") " pod="openstack/dnsmasq-dns-698758b865-x4f5m" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.706054 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a12cebe8-c719-4841-8d01-e9faf9b745cf-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-x4f5m\" (UID: \"a12cebe8-c719-4841-8d01-e9faf9b745cf\") " pod="openstack/dnsmasq-dns-698758b865-x4f5m" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.706272 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-4d7hn" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.706295 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b74e699-bc4f-4415-a9dc-8ad52d916bc0-operator-scripts\") pod \"mysqld-exporter-04a7-account-create-update-xhlq8\" (UID: \"8b74e699-bc4f-4415-a9dc-8ad52d916bc0\") " pod="openstack/mysqld-exporter-04a7-account-create-update-xhlq8" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.706406 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wknbf\" (UniqueName: \"kubernetes.io/projected/8b74e699-bc4f-4415-a9dc-8ad52d916bc0-kube-api-access-wknbf\") pod \"mysqld-exporter-04a7-account-create-update-xhlq8\" (UID: \"8b74e699-bc4f-4415-a9dc-8ad52d916bc0\") " pod="openstack/mysqld-exporter-04a7-account-create-update-xhlq8" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.706488 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a12cebe8-c719-4841-8d01-e9faf9b745cf-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-x4f5m\" (UID: \"a12cebe8-c719-4841-8d01-e9faf9b745cf\") " pod="openstack/dnsmasq-dns-698758b865-x4f5m" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.706513 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a12cebe8-c719-4841-8d01-e9faf9b745cf-config\") pod \"dnsmasq-dns-698758b865-x4f5m\" (UID: \"a12cebe8-c719-4841-8d01-e9faf9b745cf\") " pod="openstack/dnsmasq-dns-698758b865-x4f5m" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.707361 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b74e699-bc4f-4415-a9dc-8ad52d916bc0-operator-scripts\") pod \"mysqld-exporter-04a7-account-create-update-xhlq8\" (UID: \"8b74e699-bc4f-4415-a9dc-8ad52d916bc0\") " pod="openstack/mysqld-exporter-04a7-account-create-update-xhlq8" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.718740 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.750354 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wknbf\" (UniqueName: \"kubernetes.io/projected/8b74e699-bc4f-4415-a9dc-8ad52d916bc0-kube-api-access-wknbf\") pod \"mysqld-exporter-04a7-account-create-update-xhlq8\" (UID: \"8b74e699-bc4f-4415-a9dc-8ad52d916bc0\") " pod="openstack/mysqld-exporter-04a7-account-create-update-xhlq8" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.807903 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a12cebe8-c719-4841-8d01-e9faf9b745cf-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-x4f5m\" (UID: \"a12cebe8-c719-4841-8d01-e9faf9b745cf\") " pod="openstack/dnsmasq-dns-698758b865-x4f5m" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.807961 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a12cebe8-c719-4841-8d01-e9faf9b745cf-config\") pod \"dnsmasq-dns-698758b865-x4f5m\" (UID: \"a12cebe8-c719-4841-8d01-e9faf9b745cf\") " 
pod="openstack/dnsmasq-dns-698758b865-x4f5m" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.808006 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2hmg\" (UniqueName: \"kubernetes.io/projected/a12cebe8-c719-4841-8d01-e9faf9b745cf-kube-api-access-h2hmg\") pod \"dnsmasq-dns-698758b865-x4f5m\" (UID: \"a12cebe8-c719-4841-8d01-e9faf9b745cf\") " pod="openstack/dnsmasq-dns-698758b865-x4f5m" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.808037 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a12cebe8-c719-4841-8d01-e9faf9b745cf-dns-svc\") pod \"dnsmasq-dns-698758b865-x4f5m\" (UID: \"a12cebe8-c719-4841-8d01-e9faf9b745cf\") " pod="openstack/dnsmasq-dns-698758b865-x4f5m" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.808215 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a12cebe8-c719-4841-8d01-e9faf9b745cf-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-x4f5m\" (UID: \"a12cebe8-c719-4841-8d01-e9faf9b745cf\") " pod="openstack/dnsmasq-dns-698758b865-x4f5m" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.812996 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a12cebe8-c719-4841-8d01-e9faf9b745cf-config\") pod \"dnsmasq-dns-698758b865-x4f5m\" (UID: \"a12cebe8-c719-4841-8d01-e9faf9b745cf\") " pod="openstack/dnsmasq-dns-698758b865-x4f5m" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.813290 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a12cebe8-c719-4841-8d01-e9faf9b745cf-dns-svc\") pod \"dnsmasq-dns-698758b865-x4f5m\" (UID: \"a12cebe8-c719-4841-8d01-e9faf9b745cf\") " pod="openstack/dnsmasq-dns-698758b865-x4f5m" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.813305 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a12cebe8-c719-4841-8d01-e9faf9b745cf-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-x4f5m\" (UID: \"a12cebe8-c719-4841-8d01-e9faf9b745cf\") " pod="openstack/dnsmasq-dns-698758b865-x4f5m" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.813445 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a12cebe8-c719-4841-8d01-e9faf9b745cf-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-x4f5m\" (UID: \"a12cebe8-c719-4841-8d01-e9faf9b745cf\") " pod="openstack/dnsmasq-dns-698758b865-x4f5m" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.827739 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2hmg\" (UniqueName: \"kubernetes.io/projected/a12cebe8-c719-4841-8d01-e9faf9b745cf-kube-api-access-h2hmg\") pod \"dnsmasq-dns-698758b865-x4f5m\" (UID: \"a12cebe8-c719-4841-8d01-e9faf9b745cf\") " pod="openstack/dnsmasq-dns-698758b865-x4f5m" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.832674 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-04a7-account-create-update-xhlq8" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.854431 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 02 13:22:21 crc kubenswrapper[4721]: I0202 13:22:21.893449 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-x4f5m" Feb 02 13:22:22 crc kubenswrapper[4721]: I0202 13:22:22.425579 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9598cea-e831-47fd-aa1a-08060e23bba2" path="/var/lib/kubelet/pods/f9598cea-e831-47fd-aa1a-08060e23bba2/volumes" Feb 02 13:22:22 crc kubenswrapper[4721]: I0202 13:22:22.646830 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 02 13:22:22 crc kubenswrapper[4721]: I0202 13:22:22.683814 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 02 13:22:22 crc kubenswrapper[4721]: I0202 13:22:22.687686 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 02 13:22:22 crc kubenswrapper[4721]: I0202 13:22:22.687745 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 02 13:22:22 crc kubenswrapper[4721]: I0202 13:22:22.687826 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-2jt7r" Feb 02 13:22:22 crc kubenswrapper[4721]: I0202 13:22:22.687976 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 02 13:22:22 crc kubenswrapper[4721]: I0202 13:22:22.704684 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 02 13:22:22 crc kubenswrapper[4721]: I0202 13:22:22.728429 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eabe6b07-da9d-4980-99b4-12c02640c88d-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"eabe6b07-da9d-4980-99b4-12c02640c88d\") " pod="openstack/swift-storage-0" Feb 02 13:22:22 crc kubenswrapper[4721]: I0202 13:22:22.728530 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/eabe6b07-da9d-4980-99b4-12c02640c88d-lock\") pod \"swift-storage-0\" (UID: \"eabe6b07-da9d-4980-99b4-12c02640c88d\") " pod="openstack/swift-storage-0" Feb 02 13:22:22 crc kubenswrapper[4721]: I0202 13:22:22.728551 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2445c7d5-729e-46d6-8fce-b5ddd9535e1d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2445c7d5-729e-46d6-8fce-b5ddd9535e1d\") pod \"swift-storage-0\" (UID: \"eabe6b07-da9d-4980-99b4-12c02640c88d\") " pod="openstack/swift-storage-0" Feb 02 13:22:22 crc kubenswrapper[4721]: I0202 13:22:22.728633 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eabe6b07-da9d-4980-99b4-12c02640c88d-etc-swift\") pod \"swift-storage-0\" (UID: \"eabe6b07-da9d-4980-99b4-12c02640c88d\") " pod="openstack/swift-storage-0" Feb 02 13:22:22 crc kubenswrapper[4721]: I0202 13:22:22.728650 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" 
(UniqueName: \"kubernetes.io/empty-dir/eabe6b07-da9d-4980-99b4-12c02640c88d-cache\") pod \"swift-storage-0\" (UID: \"eabe6b07-da9d-4980-99b4-12c02640c88d\") " pod="openstack/swift-storage-0" Feb 02 13:22:22 crc kubenswrapper[4721]: I0202 13:22:22.728666 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9mc9\" (UniqueName: \"kubernetes.io/projected/eabe6b07-da9d-4980-99b4-12c02640c88d-kube-api-access-l9mc9\") pod \"swift-storage-0\" (UID: \"eabe6b07-da9d-4980-99b4-12c02640c88d\") " pod="openstack/swift-storage-0" Feb 02 13:22:22 crc kubenswrapper[4721]: I0202 13:22:22.830531 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/eabe6b07-da9d-4980-99b4-12c02640c88d-lock\") pod \"swift-storage-0\" (UID: \"eabe6b07-da9d-4980-99b4-12c02640c88d\") " pod="openstack/swift-storage-0" Feb 02 13:22:22 crc kubenswrapper[4721]: I0202 13:22:22.830594 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2445c7d5-729e-46d6-8fce-b5ddd9535e1d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2445c7d5-729e-46d6-8fce-b5ddd9535e1d\") pod \"swift-storage-0\" (UID: \"eabe6b07-da9d-4980-99b4-12c02640c88d\") " pod="openstack/swift-storage-0" Feb 02 13:22:22 crc kubenswrapper[4721]: I0202 13:22:22.830728 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eabe6b07-da9d-4980-99b4-12c02640c88d-etc-swift\") pod \"swift-storage-0\" (UID: \"eabe6b07-da9d-4980-99b4-12c02640c88d\") " pod="openstack/swift-storage-0" Feb 02 13:22:22 crc kubenswrapper[4721]: I0202 13:22:22.830757 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/eabe6b07-da9d-4980-99b4-12c02640c88d-cache\") pod \"swift-storage-0\" (UID: \"eabe6b07-da9d-4980-99b4-12c02640c88d\") " pod="openstack/swift-storage-0" Feb 02 13:22:22 crc kubenswrapper[4721]: I0202 13:22:22.830783 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9mc9\" (UniqueName: \"kubernetes.io/projected/eabe6b07-da9d-4980-99b4-12c02640c88d-kube-api-access-l9mc9\") pod \"swift-storage-0\" (UID: \"eabe6b07-da9d-4980-99b4-12c02640c88d\") " pod="openstack/swift-storage-0" Feb 02 13:22:22 crc kubenswrapper[4721]: I0202 13:22:22.830884 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eabe6b07-da9d-4980-99b4-12c02640c88d-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"eabe6b07-da9d-4980-99b4-12c02640c88d\") " pod="openstack/swift-storage-0" Feb 02 13:22:22 crc kubenswrapper[4721]: I0202 13:22:22.831985 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/eabe6b07-da9d-4980-99b4-12c02640c88d-lock\") pod \"swift-storage-0\" (UID: \"eabe6b07-da9d-4980-99b4-12c02640c88d\") " pod="openstack/swift-storage-0" Feb 02 13:22:22 crc kubenswrapper[4721]: I0202 13:22:22.832083 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/eabe6b07-da9d-4980-99b4-12c02640c88d-cache\") pod \"swift-storage-0\" (UID: \"eabe6b07-da9d-4980-99b4-12c02640c88d\") " pod="openstack/swift-storage-0" Feb 02 13:22:22 crc kubenswrapper[4721]: E0202 13:22:22.833234 4721 projected.go:288] Couldn't get 
configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 13:22:22 crc kubenswrapper[4721]: E0202 13:22:22.833348 4721 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 13:22:22 crc kubenswrapper[4721]: E0202 13:22:22.833482 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eabe6b07-da9d-4980-99b4-12c02640c88d-etc-swift podName:eabe6b07-da9d-4980-99b4-12c02640c88d nodeName:}" failed. No retries permitted until 2026-02-02 13:22:23.333463615 +0000 UTC m=+1283.635978004 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/eabe6b07-da9d-4980-99b4-12c02640c88d-etc-swift") pod "swift-storage-0" (UID: "eabe6b07-da9d-4980-99b4-12c02640c88d") : configmap "swift-ring-files" not found Feb 02 13:22:22 crc kubenswrapper[4721]: I0202 13:22:22.834609 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 13:22:22 crc kubenswrapper[4721]: I0202 13:22:22.834641 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2445c7d5-729e-46d6-8fce-b5ddd9535e1d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2445c7d5-729e-46d6-8fce-b5ddd9535e1d\") pod \"swift-storage-0\" (UID: \"eabe6b07-da9d-4980-99b4-12c02640c88d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b744527ddf0b30b1cc5ab21b1766bbfac23b2e50aed8c717fd9a8009cfeccd09/globalmount\"" pod="openstack/swift-storage-0" Feb 02 13:22:22 crc kubenswrapper[4721]: I0202 13:22:22.837546 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eabe6b07-da9d-4980-99b4-12c02640c88d-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"eabe6b07-da9d-4980-99b4-12c02640c88d\") " pod="openstack/swift-storage-0" Feb 02 13:22:22 crc kubenswrapper[4721]: I0202 13:22:22.860988 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9mc9\" (UniqueName: \"kubernetes.io/projected/eabe6b07-da9d-4980-99b4-12c02640c88d-kube-api-access-l9mc9\") pod \"swift-storage-0\" (UID: \"eabe6b07-da9d-4980-99b4-12c02640c88d\") " pod="openstack/swift-storage-0" Feb 02 13:22:22 crc kubenswrapper[4721]: I0202 13:22:22.877342 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2445c7d5-729e-46d6-8fce-b5ddd9535e1d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2445c7d5-729e-46d6-8fce-b5ddd9535e1d\") pod \"swift-storage-0\" (UID: \"eabe6b07-da9d-4980-99b4-12c02640c88d\") " pod="openstack/swift-storage-0" Feb 02 13:22:23 crc kubenswrapper[4721]: I0202 13:22:23.170714 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-4rnrx"] Feb 02 13:22:23 crc kubenswrapper[4721]: I0202 13:22:23.172535 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4rnrx"
Feb 02 13:22:23 crc kubenswrapper[4721]: I0202 13:22:23.175520 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 02 13:22:23 crc kubenswrapper[4721]: I0202 13:22:23.176738 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Feb 02 13:22:23 crc kubenswrapper[4721]: I0202 13:22:23.176769 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Feb 02 13:22:23 crc kubenswrapper[4721]: I0202 13:22:23.198344 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-4rnrx"]
Feb 02 13:22:23 crc kubenswrapper[4721]: I0202 13:22:23.238833 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-scripts\") pod \"swift-ring-rebalance-4rnrx\" (UID: \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\") " pod="openstack/swift-ring-rebalance-4rnrx"
Feb 02 13:22:23 crc kubenswrapper[4721]: I0202 13:22:23.239031 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-combined-ca-bundle\") pod \"swift-ring-rebalance-4rnrx\" (UID: \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\") " pod="openstack/swift-ring-rebalance-4rnrx"
Feb 02 13:22:23 crc kubenswrapper[4721]: I0202 13:22:23.239099 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-ring-data-devices\") pod \"swift-ring-rebalance-4rnrx\" (UID: \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\") " pod="openstack/swift-ring-rebalance-4rnrx"
Feb 02 13:22:23 crc kubenswrapper[4721]: I0202 13:22:23.239125 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-etc-swift\") pod \"swift-ring-rebalance-4rnrx\" (UID: \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\") " pod="openstack/swift-ring-rebalance-4rnrx"
Feb 02 13:22:23 crc kubenswrapper[4721]: I0202 13:22:23.239276 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-swiftconf\") pod \"swift-ring-rebalance-4rnrx\" (UID: \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\") " pod="openstack/swift-ring-rebalance-4rnrx"
Feb 02 13:22:23 crc kubenswrapper[4721]: I0202 13:22:23.239338 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-dispersionconf\") pod \"swift-ring-rebalance-4rnrx\" (UID: \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\") " pod="openstack/swift-ring-rebalance-4rnrx"
Feb 02 13:22:23 crc kubenswrapper[4721]: I0202 13:22:23.239450 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5rm5\" (UniqueName: \"kubernetes.io/projected/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-kube-api-access-q5rm5\") pod \"swift-ring-rebalance-4rnrx\" (UID: \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\") " pod="openstack/swift-ring-rebalance-4rnrx"
Feb 02 13:22:23 crc kubenswrapper[4721]: I0202 13:22:23.341245 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-etc-swift\") pod \"swift-ring-rebalance-4rnrx\" (UID: \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\") " pod="openstack/swift-ring-rebalance-4rnrx"
Feb 02 13:22:23 crc kubenswrapper[4721]: I0202 13:22:23.341293 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-swiftconf\") pod \"swift-ring-rebalance-4rnrx\" (UID: \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\") " pod="openstack/swift-ring-rebalance-4rnrx"
Feb 02 13:22:23 crc kubenswrapper[4721]: I0202 13:22:23.341324 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-dispersionconf\") pod \"swift-ring-rebalance-4rnrx\" (UID: \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\") " pod="openstack/swift-ring-rebalance-4rnrx"
Feb 02 13:22:23 crc kubenswrapper[4721]: I0202 13:22:23.341384 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5rm5\" (UniqueName: \"kubernetes.io/projected/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-kube-api-access-q5rm5\") pod \"swift-ring-rebalance-4rnrx\" (UID: \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\") " pod="openstack/swift-ring-rebalance-4rnrx"
Feb 02 13:22:23 crc kubenswrapper[4721]: I0202 13:22:23.341861 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-scripts\") pod \"swift-ring-rebalance-4rnrx\" (UID: \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\") " pod="openstack/swift-ring-rebalance-4rnrx"
Feb 02 13:22:23 crc kubenswrapper[4721]: I0202 13:22:23.342003 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-combined-ca-bundle\") pod \"swift-ring-rebalance-4rnrx\" (UID: \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\") " pod="openstack/swift-ring-rebalance-4rnrx"
Feb 02 13:22:23 crc kubenswrapper[4721]: I0202 13:22:23.342031 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eabe6b07-da9d-4980-99b4-12c02640c88d-etc-swift\") pod \"swift-storage-0\" (UID: \"eabe6b07-da9d-4980-99b4-12c02640c88d\") " pod="openstack/swift-storage-0"
Feb 02 13:22:23 crc kubenswrapper[4721]: I0202 13:22:23.342049 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-ring-data-devices\") pod \"swift-ring-rebalance-4rnrx\" (UID: \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\") " pod="openstack/swift-ring-rebalance-4rnrx"
Feb 02 13:22:23 crc kubenswrapper[4721]: I0202 13:22:23.342467 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-etc-swift\") pod \"swift-ring-rebalance-4rnrx\" (UID: \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\") " pod="openstack/swift-ring-rebalance-4rnrx"
Feb 02 13:22:23 crc kubenswrapper[4721]: E0202 13:22:23.342636 4721 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 02 13:22:23 crc kubenswrapper[4721]: E0202 13:22:23.342664 4721 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 02 13:22:23 crc kubenswrapper[4721]: E0202 13:22:23.342723 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eabe6b07-da9d-4980-99b4-12c02640c88d-etc-swift podName:eabe6b07-da9d-4980-99b4-12c02640c88d nodeName:}" failed. No retries permitted until 2026-02-02 13:22:24.342700393 +0000 UTC m=+1284.645214792 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/eabe6b07-da9d-4980-99b4-12c02640c88d-etc-swift") pod "swift-storage-0" (UID: "eabe6b07-da9d-4980-99b4-12c02640c88d") : configmap "swift-ring-files" not found
Feb 02 13:22:23 crc kubenswrapper[4721]: I0202 13:22:23.343215 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-scripts\") pod \"swift-ring-rebalance-4rnrx\" (UID: \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\") " pod="openstack/swift-ring-rebalance-4rnrx"
Feb 02 13:22:23 crc kubenswrapper[4721]: I0202 13:22:23.343500 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-ring-data-devices\") pod \"swift-ring-rebalance-4rnrx\" (UID: \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\") " pod="openstack/swift-ring-rebalance-4rnrx"
Feb 02 13:22:23 crc kubenswrapper[4721]: I0202 13:22:23.347515 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-swiftconf\") pod \"swift-ring-rebalance-4rnrx\" (UID: \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\") " pod="openstack/swift-ring-rebalance-4rnrx"
Feb 02 13:22:23 crc kubenswrapper[4721]: I0202 13:22:23.348562 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-combined-ca-bundle\") pod \"swift-ring-rebalance-4rnrx\" (UID: \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\") " pod="openstack/swift-ring-rebalance-4rnrx"
Feb 02 13:22:23 crc kubenswrapper[4721]: I0202 13:22:23.358133 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5rm5\" (UniqueName: \"kubernetes.io/projected/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-kube-api-access-q5rm5\") pod \"swift-ring-rebalance-4rnrx\" (UID: \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\") " pod="openstack/swift-ring-rebalance-4rnrx"
Feb 02 13:22:23 crc kubenswrapper[4721]: I0202 13:22:23.361341 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-dispersionconf\") pod \"swift-ring-rebalance-4rnrx\" (UID: \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\") " pod="openstack/swift-ring-rebalance-4rnrx"
Feb 02 13:22:23 crc kubenswrapper[4721]: I0202 13:22:23.499932 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4rnrx"
Feb 02 13:22:24 crc kubenswrapper[4721]: I0202 13:22:24.360151 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eabe6b07-da9d-4980-99b4-12c02640c88d-etc-swift\") pod \"swift-storage-0\" (UID: \"eabe6b07-da9d-4980-99b4-12c02640c88d\") " pod="openstack/swift-storage-0"
Feb 02 13:22:24 crc kubenswrapper[4721]: E0202 13:22:24.360301 4721 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 02 13:22:24 crc kubenswrapper[4721]: E0202 13:22:24.360504 4721 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 02 13:22:24 crc kubenswrapper[4721]: E0202 13:22:24.360555 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eabe6b07-da9d-4980-99b4-12c02640c88d-etc-swift podName:eabe6b07-da9d-4980-99b4-12c02640c88d nodeName:}" failed. No retries permitted until 2026-02-02 13:22:26.360541662 +0000 UTC m=+1286.663056051 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/eabe6b07-da9d-4980-99b4-12c02640c88d-etc-swift") pod "swift-storage-0" (UID: "eabe6b07-da9d-4980-99b4-12c02640c88d") : configmap "swift-ring-files" not found
Feb 02 13:22:24 crc kubenswrapper[4721]: I0202 13:22:24.424289 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-sl4gx"]
Feb 02 13:22:24 crc kubenswrapper[4721]: I0202 13:22:24.425626 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-sl4gx"
Feb 02 13:22:24 crc kubenswrapper[4721]: I0202 13:22:24.428058 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-sl4gx"]
Feb 02 13:22:24 crc kubenswrapper[4721]: I0202 13:22:24.461728 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d46c6f8-aff0-4b28-a71b-d98a894afdaf-operator-scripts\") pod \"glance-db-create-sl4gx\" (UID: \"9d46c6f8-aff0-4b28-a71b-d98a894afdaf\") " pod="openstack/glance-db-create-sl4gx"
Feb 02 13:22:24 crc kubenswrapper[4721]: I0202 13:22:24.462653 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n99fz\" (UniqueName: \"kubernetes.io/projected/9d46c6f8-aff0-4b28-a71b-d98a894afdaf-kube-api-access-n99fz\") pod \"glance-db-create-sl4gx\" (UID: \"9d46c6f8-aff0-4b28-a71b-d98a894afdaf\") " pod="openstack/glance-db-create-sl4gx"
Feb 02 13:22:24 crc kubenswrapper[4721]: I0202 13:22:24.519133 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-1d7e-account-create-update-7jmk5"]
Feb 02 13:22:24 crc kubenswrapper[4721]: I0202 13:22:24.520533 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1d7e-account-create-update-7jmk5"
Feb 02 13:22:24 crc kubenswrapper[4721]: I0202 13:22:24.522262 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 02 13:22:24 crc kubenswrapper[4721]: I0202 13:22:24.535037 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1d7e-account-create-update-7jmk5"]
Feb 02 13:22:24 crc kubenswrapper[4721]: I0202 13:22:24.569665 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q75bj\" (UniqueName: \"kubernetes.io/projected/4e1ef9e5-26ab-4b7b-b255-73968ed867ce-kube-api-access-q75bj\") pod \"glance-1d7e-account-create-update-7jmk5\" (UID: \"4e1ef9e5-26ab-4b7b-b255-73968ed867ce\") " pod="openstack/glance-1d7e-account-create-update-7jmk5"
Feb 02 13:22:24 crc kubenswrapper[4721]: I0202 13:22:24.569854 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d46c6f8-aff0-4b28-a71b-d98a894afdaf-operator-scripts\") pod \"glance-db-create-sl4gx\" (UID: \"9d46c6f8-aff0-4b28-a71b-d98a894afdaf\") " pod="openstack/glance-db-create-sl4gx"
Feb 02 13:22:24 crc kubenswrapper[4721]: I0202 13:22:24.570086 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n99fz\" (UniqueName: \"kubernetes.io/projected/9d46c6f8-aff0-4b28-a71b-d98a894afdaf-kube-api-access-n99fz\") pod \"glance-db-create-sl4gx\" (UID: \"9d46c6f8-aff0-4b28-a71b-d98a894afdaf\") " pod="openstack/glance-db-create-sl4gx"
Feb 02 13:22:24 crc kubenswrapper[4721]: I0202 13:22:24.570212 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e1ef9e5-26ab-4b7b-b255-73968ed867ce-operator-scripts\") pod \"glance-1d7e-account-create-update-7jmk5\" (UID: \"4e1ef9e5-26ab-4b7b-b255-73968ed867ce\") " pod="openstack/glance-1d7e-account-create-update-7jmk5"
Feb 02 13:22:24 crc kubenswrapper[4721]: I0202 13:22:24.571279 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d46c6f8-aff0-4b28-a71b-d98a894afdaf-operator-scripts\") pod \"glance-db-create-sl4gx\" (UID: \"9d46c6f8-aff0-4b28-a71b-d98a894afdaf\") " pod="openstack/glance-db-create-sl4gx"
Feb 02 13:22:24 crc kubenswrapper[4721]: I0202 13:22:24.595054 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n99fz\" (UniqueName: \"kubernetes.io/projected/9d46c6f8-aff0-4b28-a71b-d98a894afdaf-kube-api-access-n99fz\") pod \"glance-db-create-sl4gx\" (UID: \"9d46c6f8-aff0-4b28-a71b-d98a894afdaf\") " pod="openstack/glance-db-create-sl4gx"
Feb 02 13:22:24 crc kubenswrapper[4721]: I0202 13:22:24.672266 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e1ef9e5-26ab-4b7b-b255-73968ed867ce-operator-scripts\") pod \"glance-1d7e-account-create-update-7jmk5\" (UID: \"4e1ef9e5-26ab-4b7b-b255-73968ed867ce\") " pod="openstack/glance-1d7e-account-create-update-7jmk5"
Feb 02 13:22:24 crc kubenswrapper[4721]: I0202 13:22:24.672383 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q75bj\" (UniqueName: \"kubernetes.io/projected/4e1ef9e5-26ab-4b7b-b255-73968ed867ce-kube-api-access-q75bj\") pod \"glance-1d7e-account-create-update-7jmk5\" (UID: \"4e1ef9e5-26ab-4b7b-b255-73968ed867ce\") " pod="openstack/glance-1d7e-account-create-update-7jmk5"
Feb 02 13:22:24 crc kubenswrapper[4721]: I0202 13:22:24.673006 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e1ef9e5-26ab-4b7b-b255-73968ed867ce-operator-scripts\") pod \"glance-1d7e-account-create-update-7jmk5\" (UID: \"4e1ef9e5-26ab-4b7b-b255-73968ed867ce\") " pod="openstack/glance-1d7e-account-create-update-7jmk5"
Feb 02 13:22:24 crc kubenswrapper[4721]: I0202 13:22:24.695535 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q75bj\" (UniqueName: \"kubernetes.io/projected/4e1ef9e5-26ab-4b7b-b255-73968ed867ce-kube-api-access-q75bj\") pod \"glance-1d7e-account-create-update-7jmk5\" (UID: \"4e1ef9e5-26ab-4b7b-b255-73968ed867ce\") " pod="openstack/glance-1d7e-account-create-update-7jmk5"
Feb 02 13:22:24 crc kubenswrapper[4721]: I0202 13:22:24.751740 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-sl4gx"
Feb 02 13:22:24 crc kubenswrapper[4721]: I0202 13:22:24.838737 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1d7e-account-create-update-7jmk5"
Feb 02 13:22:25 crc kubenswrapper[4721]: I0202 13:22:25.604287 4721 scope.go:117] "RemoveContainer" containerID="a20ee72f3c2584aa99632a64dbf1cb3b2b0b3ad83db068faf7bb964a8d3d6314"
Feb 02 13:22:26 crc kubenswrapper[4721]: I0202 13:22:26.146203 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-vfbpx"]
Feb 02 13:22:26 crc kubenswrapper[4721]: I0202 13:22:26.148903 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vfbpx"
Feb 02 13:22:26 crc kubenswrapper[4721]: I0202 13:22:26.152352 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Feb 02 13:22:26 crc kubenswrapper[4721]: I0202 13:22:26.166199 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vfbpx"]
Feb 02 13:22:26 crc kubenswrapper[4721]: I0202 13:22:26.210419 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c47d165e-de4e-4f3a-8f66-4dab149c7b5e-operator-scripts\") pod \"root-account-create-update-vfbpx\" (UID: \"c47d165e-de4e-4f3a-8f66-4dab149c7b5e\") " pod="openstack/root-account-create-update-vfbpx"
Feb 02 13:22:26 crc kubenswrapper[4721]: I0202 13:22:26.210483 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6dcx\" (UniqueName: \"kubernetes.io/projected/c47d165e-de4e-4f3a-8f66-4dab149c7b5e-kube-api-access-p6dcx\") pod \"root-account-create-update-vfbpx\" (UID: \"c47d165e-de4e-4f3a-8f66-4dab149c7b5e\") " pod="openstack/root-account-create-update-vfbpx"
Feb 02 13:22:26 crc kubenswrapper[4721]: W0202 13:22:26.217268 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda12cebe8_c719_4841_8d01_e9faf9b745cf.slice/crio-521bca06923fe78f1ba71782b798785e3b87c44a41b5a65d319554abd4047afe WatchSource:0}: Error finding container 521bca06923fe78f1ba71782b798785e3b87c44a41b5a65d319554abd4047afe: Status 404 returned error can't find the container with id 521bca06923fe78f1ba71782b798785e3b87c44a41b5a65d319554abd4047afe
Feb 02 13:22:26 crc kubenswrapper[4721]: W0202 13:22:26.219755 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5e72b39_6085_4753_8b7d_a93a80c95d49.slice/crio-2a68a42d8895510ad7ca8c635add6a94961e7a340ade2d4390e22073b385345a WatchSource:0}: Error finding container 2a68a42d8895510ad7ca8c635add6a94961e7a340ade2d4390e22073b385345a: Status 404 returned error can't find the container with id 2a68a42d8895510ad7ca8c635add6a94961e7a340ade2d4390e22073b385345a
Feb 02 13:22:26 crc kubenswrapper[4721]: I0202 13:22:26.224021 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-4d7hn"]
Feb 02 13:22:26 crc kubenswrapper[4721]: I0202 13:22:26.235183 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-x4f5m"]
Feb 02 13:22:26 crc kubenswrapper[4721]: I0202 13:22:26.312930 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c47d165e-de4e-4f3a-8f66-4dab149c7b5e-operator-scripts\") pod \"root-account-create-update-vfbpx\" (UID: \"c47d165e-de4e-4f3a-8f66-4dab149c7b5e\") " pod="openstack/root-account-create-update-vfbpx"
Feb 02 13:22:26 crc kubenswrapper[4721]: I0202 13:22:26.313186 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6dcx\" (UniqueName: \"kubernetes.io/projected/c47d165e-de4e-4f3a-8f66-4dab149c7b5e-kube-api-access-p6dcx\") pod \"root-account-create-update-vfbpx\" (UID: \"c47d165e-de4e-4f3a-8f66-4dab149c7b5e\") " pod="openstack/root-account-create-update-vfbpx"
Feb 02 13:22:26 crc kubenswrapper[4721]: I0202 13:22:26.313825 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c47d165e-de4e-4f3a-8f66-4dab149c7b5e-operator-scripts\") pod \"root-account-create-update-vfbpx\" (UID: \"c47d165e-de4e-4f3a-8f66-4dab149c7b5e\") " pod="openstack/root-account-create-update-vfbpx"
Feb 02 13:22:26 crc kubenswrapper[4721]: I0202 13:22:26.377669 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6dcx\" (UniqueName: \"kubernetes.io/projected/c47d165e-de4e-4f3a-8f66-4dab149c7b5e-kube-api-access-p6dcx\") pod \"root-account-create-update-vfbpx\" (UID: \"c47d165e-de4e-4f3a-8f66-4dab149c7b5e\") " pod="openstack/root-account-create-update-vfbpx"
Feb 02 13:22:26 crc kubenswrapper[4721]: I0202 13:22:26.420694 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eabe6b07-da9d-4980-99b4-12c02640c88d-etc-swift\") pod \"swift-storage-0\" (UID: \"eabe6b07-da9d-4980-99b4-12c02640c88d\") " pod="openstack/swift-storage-0"
Feb 02 13:22:26 crc kubenswrapper[4721]: E0202 13:22:26.421358 4721 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 02 13:22:26 crc kubenswrapper[4721]: E0202 13:22:26.421377 4721 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 02 13:22:26 crc kubenswrapper[4721]: E0202 13:22:26.421420 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eabe6b07-da9d-4980-99b4-12c02640c88d-etc-swift podName:eabe6b07-da9d-4980-99b4-12c02640c88d nodeName:}" failed. No retries permitted until 2026-02-02 13:22:30.421403109 +0000 UTC m=+1290.723917498 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/eabe6b07-da9d-4980-99b4-12c02640c88d-etc-swift") pod "swift-storage-0" (UID: "eabe6b07-da9d-4980-99b4-12c02640c88d") : configmap "swift-ring-files" not found
Feb 02 13:22:26 crc kubenswrapper[4721]: I0202 13:22:26.433326 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-sl4gx"]
Feb 02 13:22:26 crc kubenswrapper[4721]: I0202 13:22:26.481335 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vfbpx"
Feb 02 13:22:26 crc kubenswrapper[4721]: I0202 13:22:26.531665 4721 generic.go:334] "Generic (PLEG): container finished" podID="a12cebe8-c719-4841-8d01-e9faf9b745cf" containerID="b3aeeebb46496223c552a8ed7c33309ec906c0d30db8ba232bc642182832692e" exitCode=0
Feb 02 13:22:26 crc kubenswrapper[4721]: I0202 13:22:26.531744 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-x4f5m" event={"ID":"a12cebe8-c719-4841-8d01-e9faf9b745cf","Type":"ContainerDied","Data":"b3aeeebb46496223c552a8ed7c33309ec906c0d30db8ba232bc642182832692e"}
Feb 02 13:22:26 crc kubenswrapper[4721]: I0202 13:22:26.531777 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-x4f5m" event={"ID":"a12cebe8-c719-4841-8d01-e9faf9b745cf","Type":"ContainerStarted","Data":"521bca06923fe78f1ba71782b798785e3b87c44a41b5a65d319554abd4047afe"}
Feb 02 13:22:26 crc kubenswrapper[4721]: I0202 13:22:26.537564 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-4d7hn" event={"ID":"b5e72b39-6085-4753-8b7d-a93a80c95d49","Type":"ContainerStarted","Data":"ac2082a5d3a7b825797912c8d9660423dc0e0e1d5b6ff60e8c46690201c145fc"}
Feb 02 13:22:26 crc kubenswrapper[4721]: I0202 13:22:26.537619 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-4d7hn" event={"ID":"b5e72b39-6085-4753-8b7d-a93a80c95d49","Type":"ContainerStarted","Data":"2a68a42d8895510ad7ca8c635add6a94961e7a340ade2d4390e22073b385345a"}
Feb 02 13:22:26 crc kubenswrapper[4721]: I0202 13:22:26.539936 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-sl4gx" event={"ID":"9d46c6f8-aff0-4b28-a71b-d98a894afdaf","Type":"ContainerStarted","Data":"139d705bb752a02c5df86ebbf850d7ede58efa5094bbbfe34e470f6b2bf2ddc6"}
Feb 02 13:22:26 crc kubenswrapper[4721]: I0202 13:22:26.547481 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"754114a2-a012-43fe-923b-a8cc3df91aa0","Type":"ContainerStarted","Data":"ee494f7d068c5f80477a3c22c3a31526d01c4a02a690967dc4ac1b911a158a98"}
Feb 02 13:22:26 crc kubenswrapper[4721]: I0202 13:22:26.577084 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-db-create-4d7hn" podStartSLOduration=5.577049147 podStartE2EDuration="5.577049147s" podCreationTimestamp="2026-02-02 13:22:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:22:26.566287456 +0000 UTC m=+1286.868801845" watchObservedRunningTime="2026-02-02 13:22:26.577049147 +0000 UTC m=+1286.879563536"
Feb 02 13:22:26 crc kubenswrapper[4721]: I0202 13:22:26.631472 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-04a7-account-create-update-xhlq8"]
Feb 02 13:22:26 crc kubenswrapper[4721]: I0202 13:22:26.657500 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-4rnrx"]
Feb 02 13:22:26 crc kubenswrapper[4721]: I0202 13:22:26.667428 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1d7e-account-create-update-7jmk5"]
Feb 02 13:22:26 crc kubenswrapper[4721]: W0202 13:22:26.681629 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e1ef9e5_26ab_4b7b_b255_73968ed867ce.slice/crio-9a07e59cfbbbd47e4cc366cbab15fadcfeb5402edac13e46b25d05b3feb38e9f WatchSource:0}: Error finding container 9a07e59cfbbbd47e4cc366cbab15fadcfeb5402edac13e46b25d05b3feb38e9f: Status 404 returned error can't find the container with id 9a07e59cfbbbd47e4cc366cbab15fadcfeb5402edac13e46b25d05b3feb38e9f
Feb 02 13:22:26 crc kubenswrapper[4721]: W0202 13:22:26.687613 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd1f15d5_77dc_4b6d_81bf_c2a8286da820.slice/crio-75aba502874704cff954aba43521e9024690a740951c41d1966d4ffb33d85812 WatchSource:0}: Error finding container 75aba502874704cff954aba43521e9024690a740951c41d1966d4ffb33d85812: Status 404 returned error can't find the container with id 75aba502874704cff954aba43521e9024690a740951c41d1966d4ffb33d85812
Feb 02 13:22:27 crc kubenswrapper[4721]: I0202 13:22:27.030384 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vfbpx"]
Feb 02 13:22:27 crc kubenswrapper[4721]: W0202 13:22:27.031121 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc47d165e_de4e_4f3a_8f66_4dab149c7b5e.slice/crio-d783d0e7af03e0407d90429cf0a634a80162978655ab4c2268c94981bd9e9d13 WatchSource:0}: Error finding container d783d0e7af03e0407d90429cf0a634a80162978655ab4c2268c94981bd9e9d13: Status 404 returned error can't find the container with id d783d0e7af03e0407d90429cf0a634a80162978655ab4c2268c94981bd9e9d13
Feb 02 13:22:27 crc kubenswrapper[4721]: I0202 13:22:27.559233 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4rnrx" event={"ID":"bd1f15d5-77dc-4b6d-81bf-c2a8286da820","Type":"ContainerStarted","Data":"75aba502874704cff954aba43521e9024690a740951c41d1966d4ffb33d85812"}
Feb 02 13:22:27 crc kubenswrapper[4721]: I0202 13:22:27.561657 4721 generic.go:334] "Generic (PLEG): container finished" podID="c47d165e-de4e-4f3a-8f66-4dab149c7b5e" containerID="c8ac03b5a6a963dc432f18a1012252ac15cbdaf5a852eb90b3130207aa267b95" exitCode=0
Feb 02 13:22:27 crc kubenswrapper[4721]: I0202 13:22:27.561742 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vfbpx" event={"ID":"c47d165e-de4e-4f3a-8f66-4dab149c7b5e","Type":"ContainerDied","Data":"c8ac03b5a6a963dc432f18a1012252ac15cbdaf5a852eb90b3130207aa267b95"}
Feb 02 13:22:27 crc kubenswrapper[4721]: I0202 13:22:27.561772 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vfbpx" event={"ID":"c47d165e-de4e-4f3a-8f66-4dab149c7b5e","Type":"ContainerStarted","Data":"d783d0e7af03e0407d90429cf0a634a80162978655ab4c2268c94981bd9e9d13"}
Feb 02 13:22:27 crc kubenswrapper[4721]: I0202 13:22:27.565501 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-x4f5m" event={"ID":"a12cebe8-c719-4841-8d01-e9faf9b745cf","Type":"ContainerStarted","Data":"76fa42f68bb28a61e9cdddef88da78612bb32973f43a836c40f835e5ed6d0856"}
Feb 02 13:22:27 crc kubenswrapper[4721]: I0202 13:22:27.565579 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-x4f5m"
Feb 02 13:22:27 crc kubenswrapper[4721]: I0202 13:22:27.568490 4721 generic.go:334] "Generic (PLEG): container finished" podID="8b74e699-bc4f-4415-a9dc-8ad52d916bc0" containerID="fa62cc31d8fc9109f8c7236f7067b1ae22093077c72a6872a6dc77d5cf6674c5" exitCode=0
Feb 02 13:22:27 crc kubenswrapper[4721]: I0202 13:22:27.568547 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-04a7-account-create-update-xhlq8" event={"ID":"8b74e699-bc4f-4415-a9dc-8ad52d916bc0","Type":"ContainerDied","Data":"fa62cc31d8fc9109f8c7236f7067b1ae22093077c72a6872a6dc77d5cf6674c5"}
Feb 02 13:22:27 crc kubenswrapper[4721]: I0202 13:22:27.568613 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-04a7-account-create-update-xhlq8" event={"ID":"8b74e699-bc4f-4415-a9dc-8ad52d916bc0","Type":"ContainerStarted","Data":"ac7c59871bbb981b1f42db9994a76f1de10fc29eb7b41e289dce1c711d0abf41"}
Feb 02 13:22:27 crc kubenswrapper[4721]: I0202 13:22:27.570281 4721 generic.go:334] "Generic (PLEG): container finished" podID="b5e72b39-6085-4753-8b7d-a93a80c95d49" containerID="ac2082a5d3a7b825797912c8d9660423dc0e0e1d5b6ff60e8c46690201c145fc" exitCode=0
Feb 02 13:22:27 crc kubenswrapper[4721]: I0202 13:22:27.570351 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-4d7hn" event={"ID":"b5e72b39-6085-4753-8b7d-a93a80c95d49","Type":"ContainerDied","Data":"ac2082a5d3a7b825797912c8d9660423dc0e0e1d5b6ff60e8c46690201c145fc"}
Feb 02 13:22:27 crc kubenswrapper[4721]: I0202 13:22:27.572653 4721 generic.go:334] "Generic (PLEG): container finished" podID="4e1ef9e5-26ab-4b7b-b255-73968ed867ce" containerID="c036ed84a2cba404110a4db04b8c7d0f021199196a70d367772128ca1a327056" exitCode=0
Feb 02 13:22:27 crc kubenswrapper[4721]: I0202 13:22:27.572702 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1d7e-account-create-update-7jmk5" event={"ID":"4e1ef9e5-26ab-4b7b-b255-73968ed867ce","Type":"ContainerDied","Data":"c036ed84a2cba404110a4db04b8c7d0f021199196a70d367772128ca1a327056"}
Feb 02 13:22:27 crc kubenswrapper[4721]: I0202 13:22:27.572723 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1d7e-account-create-update-7jmk5" event={"ID":"4e1ef9e5-26ab-4b7b-b255-73968ed867ce","Type":"ContainerStarted","Data":"9a07e59cfbbbd47e4cc366cbab15fadcfeb5402edac13e46b25d05b3feb38e9f"}
Feb 02 13:22:27 crc kubenswrapper[4721]: I0202 13:22:27.574165 4721 generic.go:334] "Generic (PLEG): container finished" podID="9d46c6f8-aff0-4b28-a71b-d98a894afdaf" containerID="4fba6951bb982a13a5360303ec96e98896da4f493023d6f3bda466f64f4a3da5" exitCode=0
Feb 02 13:22:27 crc kubenswrapper[4721]: I0202 13:22:27.574255 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-sl4gx" event={"ID":"9d46c6f8-aff0-4b28-a71b-d98a894afdaf","Type":"ContainerDied","Data":"4fba6951bb982a13a5360303ec96e98896da4f493023d6f3bda466f64f4a3da5"}
Feb 02 13:22:27 crc kubenswrapper[4721]: I0202 13:22:27.677541 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-x4f5m" podStartSLOduration=6.677515599 podStartE2EDuration="6.677515599s" podCreationTimestamp="2026-02-02 13:22:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:22:27.648492925 +0000 UTC m=+1287.951007324" watchObservedRunningTime="2026-02-02 13:22:27.677515599 +0000 UTC m=+1287.980029998"
Feb 02 13:22:28 crc kubenswrapper[4721]: I0202 13:22:28.166722 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-679d56c757-8hcnt" podUID="1e715356-9848-439f-a13d-eb00f34521ec" containerName="console" containerID="cri-o://2105e396598b1fd13640d6e576494052ae1ae901f42a2b1dd0e7f495d1eec506" gracePeriod=15
Feb 02 13:22:28 crc kubenswrapper[4721]: I0202 13:22:28.587336 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-679d56c757-8hcnt_1e715356-9848-439f-a13d-eb00f34521ec/console/0.log"
Feb 02 13:22:28 crc kubenswrapper[4721]: I0202 13:22:28.587427 4721 generic.go:334] "Generic (PLEG): container finished" podID="1e715356-9848-439f-a13d-eb00f34521ec" containerID="2105e396598b1fd13640d6e576494052ae1ae901f42a2b1dd0e7f495d1eec506" exitCode=2
Feb 02 13:22:28 crc kubenswrapper[4721]: I0202 13:22:28.587598 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-679d56c757-8hcnt" event={"ID":"1e715356-9848-439f-a13d-eb00f34521ec","Type":"ContainerDied","Data":"2105e396598b1fd13640d6e576494052ae1ae901f42a2b1dd0e7f495d1eec506"}
Feb 02 13:22:28 crc kubenswrapper[4721]: I0202 13:22:28.862900 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-4msnh"]
Feb 02 13:22:28 crc kubenswrapper[4721]: I0202 13:22:28.864623 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4msnh"
Feb 02 13:22:28 crc kubenswrapper[4721]: I0202 13:22:28.875758 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-4msnh"]
Feb 02 13:22:28 crc kubenswrapper[4721]: I0202 13:22:28.973380 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e588-account-create-update-4crm9"]
Feb 02 13:22:28 crc kubenswrapper[4721]: I0202 13:22:28.975052 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e588-account-create-update-4crm9"
Feb 02 13:22:28 crc kubenswrapper[4721]: I0202 13:22:28.979303 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Feb 02 13:22:28 crc kubenswrapper[4721]: I0202 13:22:28.984954 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e588-account-create-update-4crm9"]
Feb 02 13:22:28 crc kubenswrapper[4721]: I0202 13:22:28.995193 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzx4c\" (UniqueName: \"kubernetes.io/projected/d51234ae-bf99-49bc-a3bc-1b392f993726-kube-api-access-gzx4c\") pod \"keystone-db-create-4msnh\" (UID: \"d51234ae-bf99-49bc-a3bc-1b392f993726\") " pod="openstack/keystone-db-create-4msnh"
Feb 02 13:22:28 crc kubenswrapper[4721]: I0202 13:22:28.995270 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d51234ae-bf99-49bc-a3bc-1b392f993726-operator-scripts\") pod \"keystone-db-create-4msnh\" (UID: \"d51234ae-bf99-49bc-a3bc-1b392f993726\") " pod="openstack/keystone-db-create-4msnh"
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.096810 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d51234ae-bf99-49bc-a3bc-1b392f993726-operator-scripts\") pod \"keystone-db-create-4msnh\" (UID: \"d51234ae-bf99-49bc-a3bc-1b392f993726\") " pod="openstack/keystone-db-create-4msnh"
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.096962 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpscl\" (UniqueName: \"kubernetes.io/projected/8261a2f3-c66a-441c-9fc6-a7a6a744b8a3-kube-api-access-kpscl\") pod \"keystone-e588-account-create-update-4crm9\" (UID: \"8261a2f3-c66a-441c-9fc6-a7a6a744b8a3\") " pod="openstack/keystone-e588-account-create-update-4crm9"
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.097044 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzx4c\" (UniqueName: \"kubernetes.io/projected/d51234ae-bf99-49bc-a3bc-1b392f993726-kube-api-access-gzx4c\") pod \"keystone-db-create-4msnh\" (UID: \"d51234ae-bf99-49bc-a3bc-1b392f993726\") " pod="openstack/keystone-db-create-4msnh"
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.097103 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8261a2f3-c66a-441c-9fc6-a7a6a744b8a3-operator-scripts\") pod \"keystone-e588-account-create-update-4crm9\" (UID: \"8261a2f3-c66a-441c-9fc6-a7a6a744b8a3\") " pod="openstack/keystone-e588-account-create-update-4crm9"
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.097716 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d51234ae-bf99-49bc-a3bc-1b392f993726-operator-scripts\") pod \"keystone-db-create-4msnh\" (UID: \"d51234ae-bf99-49bc-a3bc-1b392f993726\") " pod="openstack/keystone-db-create-4msnh"
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.129949 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzx4c\" (UniqueName: \"kubernetes.io/projected/d51234ae-bf99-49bc-a3bc-1b392f993726-kube-api-access-gzx4c\") pod \"keystone-db-create-4msnh\" (UID: \"d51234ae-bf99-49bc-a3bc-1b392f993726\") " pod="openstack/keystone-db-create-4msnh"
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.201636 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8261a2f3-c66a-441c-9fc6-a7a6a744b8a3-operator-scripts\") pod \"keystone-e588-account-create-update-4crm9\" (UID: \"8261a2f3-c66a-441c-9fc6-a7a6a744b8a3\") " pod="openstack/keystone-e588-account-create-update-4crm9"
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.201856 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpscl\" (UniqueName: \"kubernetes.io/projected/8261a2f3-c66a-441c-9fc6-a7a6a744b8a3-kube-api-access-kpscl\") pod \"keystone-e588-account-create-update-4crm9\" (UID: \"8261a2f3-c66a-441c-9fc6-a7a6a744b8a3\") " pod="openstack/keystone-e588-account-create-update-4crm9"
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.202994 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4msnh"
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.203099 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8261a2f3-c66a-441c-9fc6-a7a6a744b8a3-operator-scripts\") pod \"keystone-e588-account-create-update-4crm9\" (UID: \"8261a2f3-c66a-441c-9fc6-a7a6a744b8a3\") " pod="openstack/keystone-e588-account-create-update-4crm9"
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.245870 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-b945b"]
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.247214 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpscl\" (UniqueName: \"kubernetes.io/projected/8261a2f3-c66a-441c-9fc6-a7a6a744b8a3-kube-api-access-kpscl\") pod \"keystone-e588-account-create-update-4crm9\" (UID: \"8261a2f3-c66a-441c-9fc6-a7a6a744b8a3\") " pod="openstack/keystone-e588-account-create-update-4crm9"
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.250297 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-b945b"
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.291713 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-b945b"]
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.300866 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e588-account-create-update-4crm9"
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.329601 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-f5fd-account-create-update-8s8md"]
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.331166 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f5fd-account-create-update-8s8md"
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.335392 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.339883 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f5fd-account-create-update-8s8md"]
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.406615 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e071a9e9-d1fa-41c2-a0b4-3ddc2470055b-operator-scripts\") pod \"placement-db-create-b945b\" (UID: \"e071a9e9-d1fa-41c2-a0b4-3ddc2470055b\") " pod="openstack/placement-db-create-b945b"
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.407100 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmwlz\" (UniqueName: \"kubernetes.io/projected/e071a9e9-d1fa-41c2-a0b4-3ddc2470055b-kube-api-access-mmwlz\") pod \"placement-db-create-b945b\" (UID: \"e071a9e9-d1fa-41c2-a0b4-3ddc2470055b\") " pod="openstack/placement-db-create-b945b"
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.509794 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lfn7\" (UniqueName: \"kubernetes.io/projected/3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd-kube-api-access-8lfn7\") pod \"placement-f5fd-account-create-update-8s8md\" (UID: \"3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd\") " pod="openstack/placement-f5fd-account-create-update-8s8md"
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.510209 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd-operator-scripts\") pod \"placement-f5fd-account-create-update-8s8md\" (UID: \"3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd\") " pod="openstack/placement-f5fd-account-create-update-8s8md"
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.510613 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmwlz\" (UniqueName: \"kubernetes.io/projected/e071a9e9-d1fa-41c2-a0b4-3ddc2470055b-kube-api-access-mmwlz\") pod \"placement-db-create-b945b\" (UID: \"e071a9e9-d1fa-41c2-a0b4-3ddc2470055b\") " pod="openstack/placement-db-create-b945b"
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.510949 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e071a9e9-d1fa-41c2-a0b4-3ddc2470055b-operator-scripts\") pod \"placement-db-create-b945b\" (UID: \"e071a9e9-d1fa-41c2-a0b4-3ddc2470055b\") " pod="openstack/placement-db-create-b945b"
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.513536 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e071a9e9-d1fa-41c2-a0b4-3ddc2470055b-operator-scripts\") pod \"placement-db-create-b945b\" (UID: \"e071a9e9-d1fa-41c2-a0b4-3ddc2470055b\") " pod="openstack/placement-db-create-b945b"
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.526927 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmwlz\" (UniqueName: \"kubernetes.io/projected/e071a9e9-d1fa-41c2-a0b4-3ddc2470055b-kube-api-access-mmwlz\") pod \"placement-db-create-b945b\" (UID: \"e071a9e9-d1fa-41c2-a0b4-3ddc2470055b\") " pod="openstack/placement-db-create-b945b"
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.601340 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"754114a2-a012-43fe-923b-a8cc3df91aa0","Type":"ContainerStarted","Data":"cac1e1e212591677852178e9111f4a55b1b7ae8d7ed3f2354152657d155f18e1"}
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.613663 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lfn7\" (UniqueName: \"kubernetes.io/projected/3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd-kube-api-access-8lfn7\") pod \"placement-f5fd-account-create-update-8s8md\" (UID: \"3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd\") " pod="openstack/placement-f5fd-account-create-update-8s8md"
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.613753 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd-operator-scripts\") pod \"placement-f5fd-account-create-update-8s8md\" (UID: \"3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd\") " pod="openstack/placement-f5fd-account-create-update-8s8md"
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.614945 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd-operator-scripts\") pod \"placement-f5fd-account-create-update-8s8md\" (UID: \"3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd\") " pod="openstack/placement-f5fd-account-create-update-8s8md"
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.621699 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-b945b"
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.633161 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lfn7\" (UniqueName: \"kubernetes.io/projected/3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd-kube-api-access-8lfn7\") pod \"placement-f5fd-account-create-update-8s8md\" (UID: \"3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd\") " pod="openstack/placement-f5fd-account-create-update-8s8md"
Feb 02 13:22:29 crc kubenswrapper[4721]: I0202 13:22:29.668453 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f5fd-account-create-update-8s8md"
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.432912 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eabe6b07-da9d-4980-99b4-12c02640c88d-etc-swift\") pod \"swift-storage-0\" (UID: \"eabe6b07-da9d-4980-99b4-12c02640c88d\") " pod="openstack/swift-storage-0"
Feb 02 13:22:30 crc kubenswrapper[4721]: E0202 13:22:30.433461 4721 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 02 13:22:30 crc kubenswrapper[4721]: E0202 13:22:30.433478 4721 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 02 13:22:30 crc kubenswrapper[4721]: E0202 13:22:30.433521 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eabe6b07-da9d-4980-99b4-12c02640c88d-etc-swift podName:eabe6b07-da9d-4980-99b4-12c02640c88d nodeName:}" failed. No retries permitted until 2026-02-02 13:22:38.43350631 +0000 UTC m=+1298.736020699 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/eabe6b07-da9d-4980-99b4-12c02640c88d-etc-swift") pod "swift-storage-0" (UID: "eabe6b07-da9d-4980-99b4-12c02640c88d") : configmap "swift-ring-files" not found
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.529526 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-04a7-account-create-update-xhlq8"
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.574099 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vfbpx"
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.576135 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-4d7hn"
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.594437 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1d7e-account-create-update-7jmk5"
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.622404 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-sl4gx"
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.626890 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-sl4gx" event={"ID":"9d46c6f8-aff0-4b28-a71b-d98a894afdaf","Type":"ContainerDied","Data":"139d705bb752a02c5df86ebbf850d7ede58efa5094bbbfe34e470f6b2bf2ddc6"}
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.626914 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-sl4gx"
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.626933 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="139d705bb752a02c5df86ebbf850d7ede58efa5094bbbfe34e470f6b2bf2ddc6"
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.628827 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vfbpx" event={"ID":"c47d165e-de4e-4f3a-8f66-4dab149c7b5e","Type":"ContainerDied","Data":"d783d0e7af03e0407d90429cf0a634a80162978655ab4c2268c94981bd9e9d13"}
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.628857 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vfbpx"
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.628867 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d783d0e7af03e0407d90429cf0a634a80162978655ab4c2268c94981bd9e9d13"
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.633791 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-04a7-account-create-update-xhlq8" event={"ID":"8b74e699-bc4f-4415-a9dc-8ad52d916bc0","Type":"ContainerDied","Data":"ac7c59871bbb981b1f42db9994a76f1de10fc29eb7b41e289dce1c711d0abf41"}
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.633856 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac7c59871bbb981b1f42db9994a76f1de10fc29eb7b41e289dce1c711d0abf41"
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.633920 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-04a7-account-create-update-xhlq8"
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.639440 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wknbf\" (UniqueName: \"kubernetes.io/projected/8b74e699-bc4f-4415-a9dc-8ad52d916bc0-kube-api-access-wknbf\") pod \"8b74e699-bc4f-4415-a9dc-8ad52d916bc0\" (UID: \"8b74e699-bc4f-4415-a9dc-8ad52d916bc0\") "
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.639516 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b74e699-bc4f-4415-a9dc-8ad52d916bc0-operator-scripts\") pod \"8b74e699-bc4f-4415-a9dc-8ad52d916bc0\" (UID: \"8b74e699-bc4f-4415-a9dc-8ad52d916bc0\") "
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.639549 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-4d7hn" event={"ID":"b5e72b39-6085-4753-8b7d-a93a80c95d49","Type":"ContainerDied","Data":"2a68a42d8895510ad7ca8c635add6a94961e7a340ade2d4390e22073b385345a"}
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.639577 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a68a42d8895510ad7ca8c635add6a94961e7a340ade2d4390e22073b385345a"
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.639628 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-4d7hn"
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.640925 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b74e699-bc4f-4415-a9dc-8ad52d916bc0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8b74e699-bc4f-4415-a9dc-8ad52d916bc0" (UID: "8b74e699-bc4f-4415-a9dc-8ad52d916bc0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.643135 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1d7e-account-create-update-7jmk5" event={"ID":"4e1ef9e5-26ab-4b7b-b255-73968ed867ce","Type":"ContainerDied","Data":"9a07e59cfbbbd47e4cc366cbab15fadcfeb5402edac13e46b25d05b3feb38e9f"}
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.643175 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a07e59cfbbbd47e4cc366cbab15fadcfeb5402edac13e46b25d05b3feb38e9f"
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.643232 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1d7e-account-create-update-7jmk5"
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.650263 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b74e699-bc4f-4415-a9dc-8ad52d916bc0-kube-api-access-wknbf" (OuterVolumeSpecName: "kube-api-access-wknbf") pod "8b74e699-bc4f-4415-a9dc-8ad52d916bc0" (UID: "8b74e699-bc4f-4415-a9dc-8ad52d916bc0"). InnerVolumeSpecName "kube-api-access-wknbf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.742117 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5e72b39-6085-4753-8b7d-a93a80c95d49-operator-scripts\") pod \"b5e72b39-6085-4753-8b7d-a93a80c95d49\" (UID: \"b5e72b39-6085-4753-8b7d-a93a80c95d49\") "
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.742447 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6dcx\" (UniqueName: \"kubernetes.io/projected/c47d165e-de4e-4f3a-8f66-4dab149c7b5e-kube-api-access-p6dcx\") pod \"c47d165e-de4e-4f3a-8f66-4dab149c7b5e\" (UID: \"c47d165e-de4e-4f3a-8f66-4dab149c7b5e\") "
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.742564 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5e72b39-6085-4753-8b7d-a93a80c95d49-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b5e72b39-6085-4753-8b7d-a93a80c95d49" (UID: "b5e72b39-6085-4753-8b7d-a93a80c95d49"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.742677 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c47d165e-de4e-4f3a-8f66-4dab149c7b5e-operator-scripts\") pod \"c47d165e-de4e-4f3a-8f66-4dab149c7b5e\" (UID: \"c47d165e-de4e-4f3a-8f66-4dab149c7b5e\") "
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.742734 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q75bj\" (UniqueName: \"kubernetes.io/projected/4e1ef9e5-26ab-4b7b-b255-73968ed867ce-kube-api-access-q75bj\") pod \"4e1ef9e5-26ab-4b7b-b255-73968ed867ce\" (UID: \"4e1ef9e5-26ab-4b7b-b255-73968ed867ce\") "
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.742793 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d46c6f8-aff0-4b28-a71b-d98a894afdaf-operator-scripts\") pod \"9d46c6f8-aff0-4b28-a71b-d98a894afdaf\" (UID: \"9d46c6f8-aff0-4b28-a71b-d98a894afdaf\") "
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.742874 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s645b\" (UniqueName: \"kubernetes.io/projected/b5e72b39-6085-4753-8b7d-a93a80c95d49-kube-api-access-s645b\") pod \"b5e72b39-6085-4753-8b7d-a93a80c95d49\" (UID: \"b5e72b39-6085-4753-8b7d-a93a80c95d49\") "
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.742908 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n99fz\" (UniqueName: \"kubernetes.io/projected/9d46c6f8-aff0-4b28-a71b-d98a894afdaf-kube-api-access-n99fz\") pod \"9d46c6f8-aff0-4b28-a71b-d98a894afdaf\" (UID: \"9d46c6f8-aff0-4b28-a71b-d98a894afdaf\") "
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.742945 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e1ef9e5-26ab-4b7b-b255-73968ed867ce-operator-scripts\") pod \"4e1ef9e5-26ab-4b7b-b255-73968ed867ce\" (UID: \"4e1ef9e5-26ab-4b7b-b255-73968ed867ce\") "
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.743903 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e1ef9e5-26ab-4b7b-b255-73968ed867ce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4e1ef9e5-26ab-4b7b-b255-73968ed867ce" (UID: "4e1ef9e5-26ab-4b7b-b255-73968ed867ce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.744305 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c47d165e-de4e-4f3a-8f66-4dab149c7b5e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c47d165e-de4e-4f3a-8f66-4dab149c7b5e" (UID: "c47d165e-de4e-4f3a-8f66-4dab149c7b5e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.744355 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wknbf\" (UniqueName: \"kubernetes.io/projected/8b74e699-bc4f-4415-a9dc-8ad52d916bc0-kube-api-access-wknbf\") on node \"crc\" DevicePath \"\""
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.744389 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5e72b39-6085-4753-8b7d-a93a80c95d49-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.744398 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b74e699-bc4f-4415-a9dc-8ad52d916bc0-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.745459 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d46c6f8-aff0-4b28-a71b-d98a894afdaf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9d46c6f8-aff0-4b28-a71b-d98a894afdaf" (UID: "9d46c6f8-aff0-4b28-a71b-d98a894afdaf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.748637 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e1ef9e5-26ab-4b7b-b255-73968ed867ce-kube-api-access-q75bj" (OuterVolumeSpecName: "kube-api-access-q75bj") pod "4e1ef9e5-26ab-4b7b-b255-73968ed867ce" (UID: "4e1ef9e5-26ab-4b7b-b255-73968ed867ce"). InnerVolumeSpecName "kube-api-access-q75bj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.750851 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c47d165e-de4e-4f3a-8f66-4dab149c7b5e-kube-api-access-p6dcx" (OuterVolumeSpecName: "kube-api-access-p6dcx") pod "c47d165e-de4e-4f3a-8f66-4dab149c7b5e" (UID: "c47d165e-de4e-4f3a-8f66-4dab149c7b5e"). InnerVolumeSpecName "kube-api-access-p6dcx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.750897 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d46c6f8-aff0-4b28-a71b-d98a894afdaf-kube-api-access-n99fz" (OuterVolumeSpecName: "kube-api-access-n99fz") pod "9d46c6f8-aff0-4b28-a71b-d98a894afdaf" (UID: "9d46c6f8-aff0-4b28-a71b-d98a894afdaf"). InnerVolumeSpecName "kube-api-access-n99fz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.750941 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5e72b39-6085-4753-8b7d-a93a80c95d49-kube-api-access-s645b" (OuterVolumeSpecName: "kube-api-access-s645b") pod "b5e72b39-6085-4753-8b7d-a93a80c95d49" (UID: "b5e72b39-6085-4753-8b7d-a93a80c95d49"). InnerVolumeSpecName "kube-api-access-s645b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.790290 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-679d56c757-8hcnt_1e715356-9848-439f-a13d-eb00f34521ec/console/0.log"
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.790389 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-679d56c757-8hcnt"
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.849143 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6dcx\" (UniqueName: \"kubernetes.io/projected/c47d165e-de4e-4f3a-8f66-4dab149c7b5e-kube-api-access-p6dcx\") on node \"crc\" DevicePath \"\""
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.849173 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c47d165e-de4e-4f3a-8f66-4dab149c7b5e-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.849183 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q75bj\" (UniqueName: \"kubernetes.io/projected/4e1ef9e5-26ab-4b7b-b255-73968ed867ce-kube-api-access-q75bj\") on node \"crc\" DevicePath \"\""
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.849192 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d46c6f8-aff0-4b28-a71b-d98a894afdaf-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.849201 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s645b\" (UniqueName: \"kubernetes.io/projected/b5e72b39-6085-4753-8b7d-a93a80c95d49-kube-api-access-s645b\") on node \"crc\" DevicePath \"\""
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.849211 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n99fz\" (UniqueName: \"kubernetes.io/projected/9d46c6f8-aff0-4b28-a71b-d98a894afdaf-kube-api-access-n99fz\") on node \"crc\" DevicePath \"\""
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.849219 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e1ef9e5-26ab-4b7b-b255-73968ed867ce-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.950369 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xhnw\" (UniqueName: \"kubernetes.io/projected/1e715356-9848-439f-a13d-eb00f34521ec-kube-api-access-7xhnw\") pod \"1e715356-9848-439f-a13d-eb00f34521ec\" (UID: \"1e715356-9848-439f-a13d-eb00f34521ec\") "
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.950446 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1e715356-9848-439f-a13d-eb00f34521ec-console-oauth-config\") pod \"1e715356-9848-439f-a13d-eb00f34521ec\" (UID: \"1e715356-9848-439f-a13d-eb00f34521ec\") "
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.950492 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1e715356-9848-439f-a13d-eb00f34521ec-console-config\") pod \"1e715356-9848-439f-a13d-eb00f34521ec\" (UID: \"1e715356-9848-439f-a13d-eb00f34521ec\") "
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.950693 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e715356-9848-439f-a13d-eb00f34521ec-trusted-ca-bundle\") pod \"1e715356-9848-439f-a13d-eb00f34521ec\" (UID: \"1e715356-9848-439f-a13d-eb00f34521ec\") "
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.950735 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1e715356-9848-439f-a13d-eb00f34521ec-console-serving-cert\") pod \"1e715356-9848-439f-a13d-eb00f34521ec\" (UID: \"1e715356-9848-439f-a13d-eb00f34521ec\") "
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.950816 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1e715356-9848-439f-a13d-eb00f34521ec-service-ca\") pod \"1e715356-9848-439f-a13d-eb00f34521ec\" (UID: \"1e715356-9848-439f-a13d-eb00f34521ec\") "
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.950870 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1e715356-9848-439f-a13d-eb00f34521ec-oauth-serving-cert\") pod \"1e715356-9848-439f-a13d-eb00f34521ec\" (UID: \"1e715356-9848-439f-a13d-eb00f34521ec\") "
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.955956 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e715356-9848-439f-a13d-eb00f34521ec-service-ca" (OuterVolumeSpecName: "service-ca") pod "1e715356-9848-439f-a13d-eb00f34521ec" (UID: "1e715356-9848-439f-a13d-eb00f34521ec"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.956775 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e715356-9848-439f-a13d-eb00f34521ec-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "1e715356-9848-439f-a13d-eb00f34521ec" (UID: "1e715356-9848-439f-a13d-eb00f34521ec"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.956797 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e715356-9848-439f-a13d-eb00f34521ec-console-config" (OuterVolumeSpecName: "console-config") pod "1e715356-9848-439f-a13d-eb00f34521ec" (UID: "1e715356-9848-439f-a13d-eb00f34521ec"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.956804 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e715356-9848-439f-a13d-eb00f34521ec-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1e715356-9848-439f-a13d-eb00f34521ec" (UID: "1e715356-9848-439f-a13d-eb00f34521ec"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.961716 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e715356-9848-439f-a13d-eb00f34521ec-kube-api-access-7xhnw" (OuterVolumeSpecName: "kube-api-access-7xhnw") pod "1e715356-9848-439f-a13d-eb00f34521ec" (UID: "1e715356-9848-439f-a13d-eb00f34521ec"). InnerVolumeSpecName "kube-api-access-7xhnw".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.961730 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e715356-9848-439f-a13d-eb00f34521ec-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "1e715356-9848-439f-a13d-eb00f34521ec" (UID: "1e715356-9848-439f-a13d-eb00f34521ec"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:22:30 crc kubenswrapper[4721]: I0202 13:22:30.963287 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e715356-9848-439f-a13d-eb00f34521ec-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "1e715356-9848-439f-a13d-eb00f34521ec" (UID: "1e715356-9848-439f-a13d-eb00f34521ec"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.054329 4721 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e715356-9848-439f-a13d-eb00f34521ec-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.054363 4721 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1e715356-9848-439f-a13d-eb00f34521ec-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.054374 4721 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1e715356-9848-439f-a13d-eb00f34521ec-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.054383 4721 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1e715356-9848-439f-a13d-eb00f34521ec-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.054392 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xhnw\" (UniqueName: \"kubernetes.io/projected/1e715356-9848-439f-a13d-eb00f34521ec-kube-api-access-7xhnw\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.054404 4721 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1e715356-9848-439f-a13d-eb00f34521ec-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.054416 4721 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1e715356-9848-439f-a13d-eb00f34521ec-console-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.168578 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f5fd-account-create-update-8s8md"] Feb 02 13:22:31 crc kubenswrapper[4721]: W0202 13:22:31.183146 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8261a2f3_c66a_441c_9fc6_a7a6a744b8a3.slice/crio-5e42e28e547417761133e79bed203e5b34892ed4d47a98b44e202ab8a75a5f74 WatchSource:0}: Error finding container 5e42e28e547417761133e79bed203e5b34892ed4d47a98b44e202ab8a75a5f74: Status 404 returned error can't find the container with id 
5e42e28e547417761133e79bed203e5b34892ed4d47a98b44e202ab8a75a5f74 Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.184371 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e588-account-create-update-4crm9"] Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.199761 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-b945b"] Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.207211 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-4msnh"] Feb 02 13:22:31 crc kubenswrapper[4721]: W0202 13:22:31.222980 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd51234ae_bf99_49bc_a3bc_1b392f993726.slice/crio-3241884a94bec9f4ab719e5306750fbea8f5c6d76bdd6847e89608ef5d722eca WatchSource:0}: Error finding container 3241884a94bec9f4ab719e5306750fbea8f5c6d76bdd6847e89608ef5d722eca: Status 404 returned error can't find the container with id 3241884a94bec9f4ab719e5306750fbea8f5c6d76bdd6847e89608ef5d722eca Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.656014 4721 generic.go:334] "Generic (PLEG): container finished" podID="a57cea33-806c-4028-b59f-9f5e65289eac" containerID="c824f81205ecd08e0b369ac8397beb509a4ac36dc83f526d99dfec02aa78a3a3" exitCode=0 Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.656116 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a57cea33-806c-4028-b59f-9f5e65289eac","Type":"ContainerDied","Data":"c824f81205ecd08e0b369ac8397beb509a4ac36dc83f526d99dfec02aa78a3a3"} Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.664663 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e588-account-create-update-4crm9" event={"ID":"8261a2f3-c66a-441c-9fc6-a7a6a744b8a3","Type":"ContainerStarted","Data":"041fca898cdc3bff357d1c5b88b2d8189fb9511b9b78d7578e5238edecd243a2"} Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.664701 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e588-account-create-update-4crm9" event={"ID":"8261a2f3-c66a-441c-9fc6-a7a6a744b8a3","Type":"ContainerStarted","Data":"5e42e28e547417761133e79bed203e5b34892ed4d47a98b44e202ab8a75a5f74"} Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.669194 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4msnh" event={"ID":"d51234ae-bf99-49bc-a3bc-1b392f993726","Type":"ContainerStarted","Data":"2e42426b62e4be73df9f47cc7c9f8475c22ad72a27299b42b9cb2460af93ca8b"} Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.669234 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4msnh" event={"ID":"d51234ae-bf99-49bc-a3bc-1b392f993726","Type":"ContainerStarted","Data":"3241884a94bec9f4ab719e5306750fbea8f5c6d76bdd6847e89608ef5d722eca"} Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.672383 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-b945b" event={"ID":"e071a9e9-d1fa-41c2-a0b4-3ddc2470055b","Type":"ContainerStarted","Data":"17ddfbe07f4d8bd38ac75c2dd4cd30a97224663bc355d41b2756657171654039"} Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.672425 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-b945b" 
event={"ID":"e071a9e9-d1fa-41c2-a0b4-3ddc2470055b","Type":"ContainerStarted","Data":"9e99bc7a71cde3a1726dccb420d69877ac39ece8a909aee37a1f367da53f1236"} Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.678944 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-679d56c757-8hcnt_1e715356-9848-439f-a13d-eb00f34521ec/console/0.log" Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.679024 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-679d56c757-8hcnt" event={"ID":"1e715356-9848-439f-a13d-eb00f34521ec","Type":"ContainerDied","Data":"d99eec54d3234b8ee9ca1f4e6b988bce26f945deba80a7904e833116e7ebcfe1"} Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.679079 4721 scope.go:117] "RemoveContainer" containerID="2105e396598b1fd13640d6e576494052ae1ae901f42a2b1dd0e7f495d1eec506" Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.679250 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-679d56c757-8hcnt" Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.686836 4721 generic.go:334] "Generic (PLEG): container finished" podID="b6dbd607-3fa8-48e0-b420-4e939a47c460" containerID="ee3f7790a31c71e90ec0266b207ae1db8fc80457afd04cb12988c018efe7f723" exitCode=0 Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.686925 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"b6dbd607-3fa8-48e0-b420-4e939a47c460","Type":"ContainerDied","Data":"ee3f7790a31c71e90ec0266b207ae1db8fc80457afd04cb12988c018efe7f723"} Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.697894 4721 generic.go:334] "Generic (PLEG): container finished" podID="496bb19e-217b-4896-9bee-8082ac5da28b" containerID="77992fe4aaf2cf252e3f8b5179aa81c542c6cc143ebeb5bc6250cbc35937ab79" exitCode=0 Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.697985 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"496bb19e-217b-4896-9bee-8082ac5da28b","Type":"ContainerDied","Data":"77992fe4aaf2cf252e3f8b5179aa81c542c6cc143ebeb5bc6250cbc35937ab79"} Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.716454 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f5fd-account-create-update-8s8md" event={"ID":"3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd","Type":"ContainerStarted","Data":"ce2892d88a036a25fa1c546c389b3e2c80e44cee3ffdb13ac0e6fe0ce93c414d"} Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.716550 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f5fd-account-create-update-8s8md" event={"ID":"3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd","Type":"ContainerStarted","Data":"0a13c2c88a06c50133a2555869f297c8c47bf2195f08e760fef3c19438536755"} Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.755348 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4rnrx" event={"ID":"bd1f15d5-77dc-4b6d-81bf-c2a8286da820","Type":"ContainerStarted","Data":"6cc2ebc0d6497a1bebde8bd624dd9a320b42cd5a52feffbd34983448a9d2b4a3"} Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.785987 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-4msnh" podStartSLOduration=3.785963306 podStartE2EDuration="3.785963306s" podCreationTimestamp="2026-02-02 13:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-02 13:22:31.76653198 +0000 UTC m=+1292.069046369" watchObservedRunningTime="2026-02-02 13:22:31.785963306 +0000 UTC m=+1292.088477695" Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.800270 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-e588-account-create-update-4crm9" podStartSLOduration=3.800251592 podStartE2EDuration="3.800251592s" podCreationTimestamp="2026-02-02 13:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:22:31.788502335 +0000 UTC m=+1292.091016734" watchObservedRunningTime="2026-02-02 13:22:31.800251592 +0000 UTC m=+1292.102765981" Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.811582 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-b945b" podStartSLOduration=2.811562647 podStartE2EDuration="2.811562647s" podCreationTimestamp="2026-02-02 13:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:22:31.807888029 +0000 UTC m=+1292.110402428" watchObservedRunningTime="2026-02-02 13:22:31.811562647 +0000 UTC m=+1292.114077046" Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.846600 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-f5fd-account-create-update-8s8md" podStartSLOduration=2.846575564 podStartE2EDuration="2.846575564s" podCreationTimestamp="2026-02-02 13:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:22:31.842105183 +0000 UTC m=+1292.144619572" watchObservedRunningTime="2026-02-02 13:22:31.846575564 +0000 UTC m=+1292.149089953" Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.894612 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-4rnrx" podStartSLOduration=5.215218677 podStartE2EDuration="8.894590983s" podCreationTimestamp="2026-02-02 13:22:23 +0000 UTC" firstStartedPulling="2026-02-02 13:22:26.720275419 +0000 UTC m=+1287.022789808" lastFinishedPulling="2026-02-02 13:22:30.399647725 +0000 UTC m=+1290.702162114" observedRunningTime="2026-02-02 13:22:31.888154849 +0000 UTC m=+1292.190669258" watchObservedRunningTime="2026-02-02 13:22:31.894590983 +0000 UTC m=+1292.197105382" Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.963357 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-679d56c757-8hcnt"] Feb 02 13:22:31 crc kubenswrapper[4721]: I0202 13:22:31.975512 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-679d56c757-8hcnt"] Feb 02 13:22:32 crc kubenswrapper[4721]: I0202 13:22:32.420620 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e715356-9848-439f-a13d-eb00f34521ec" path="/var/lib/kubelet/pods/1e715356-9848-439f-a13d-eb00f34521ec/volumes" Feb 02 13:22:32 crc kubenswrapper[4721]: I0202 13:22:32.730623 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-vfbpx"] Feb 02 13:22:32 crc kubenswrapper[4721]: I0202 13:22:32.741150 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-vfbpx"] Feb 02 13:22:32 crc kubenswrapper[4721]: I0202 13:22:32.782651 4721 generic.go:334] "Generic (PLEG): container 
finished" podID="e071a9e9-d1fa-41c2-a0b4-3ddc2470055b" containerID="17ddfbe07f4d8bd38ac75c2dd4cd30a97224663bc355d41b2756657171654039" exitCode=0 Feb 02 13:22:32 crc kubenswrapper[4721]: I0202 13:22:32.782774 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-b945b" event={"ID":"e071a9e9-d1fa-41c2-a0b4-3ddc2470055b","Type":"ContainerDied","Data":"17ddfbe07f4d8bd38ac75c2dd4cd30a97224663bc355d41b2756657171654039"} Feb 02 13:22:32 crc kubenswrapper[4721]: I0202 13:22:32.787997 4721 generic.go:334] "Generic (PLEG): container finished" podID="3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd" containerID="ce2892d88a036a25fa1c546c389b3e2c80e44cee3ffdb13ac0e6fe0ce93c414d" exitCode=0 Feb 02 13:22:32 crc kubenswrapper[4721]: I0202 13:22:32.788051 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f5fd-account-create-update-8s8md" event={"ID":"3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd","Type":"ContainerDied","Data":"ce2892d88a036a25fa1c546c389b3e2c80e44cee3ffdb13ac0e6fe0ce93c414d"} Feb 02 13:22:32 crc kubenswrapper[4721]: I0202 13:22:32.790611 4721 generic.go:334] "Generic (PLEG): container finished" podID="4d21d961-1540-4610-89c0-ee265f66d728" containerID="562fa5c00d88c0f5830431f80588f0c092e7efe8f3354457564b72a3bf152ac5" exitCode=0 Feb 02 13:22:32 crc kubenswrapper[4721]: I0202 13:22:32.790667 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"4d21d961-1540-4610-89c0-ee265f66d728","Type":"ContainerDied","Data":"562fa5c00d88c0f5830431f80588f0c092e7efe8f3354457564b72a3bf152ac5"} Feb 02 13:22:32 crc kubenswrapper[4721]: I0202 13:22:32.794566 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"496bb19e-217b-4896-9bee-8082ac5da28b","Type":"ContainerStarted","Data":"872bd030dc963253a68ebeb270b4c0a194dbd3eca5e9c121eb2ad50da1358c6e"} Feb 02 13:22:32 crc kubenswrapper[4721]: I0202 13:22:32.794853 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:32 crc kubenswrapper[4721]: I0202 13:22:32.797868 4721 generic.go:334] "Generic (PLEG): container finished" podID="8261a2f3-c66a-441c-9fc6-a7a6a744b8a3" containerID="041fca898cdc3bff357d1c5b88b2d8189fb9511b9b78d7578e5238edecd243a2" exitCode=0 Feb 02 13:22:32 crc kubenswrapper[4721]: I0202 13:22:32.798100 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e588-account-create-update-4crm9" event={"ID":"8261a2f3-c66a-441c-9fc6-a7a6a744b8a3","Type":"ContainerDied","Data":"041fca898cdc3bff357d1c5b88b2d8189fb9511b9b78d7578e5238edecd243a2"} Feb 02 13:22:32 crc kubenswrapper[4721]: I0202 13:22:32.800964 4721 generic.go:334] "Generic (PLEG): container finished" podID="d51234ae-bf99-49bc-a3bc-1b392f993726" containerID="2e42426b62e4be73df9f47cc7c9f8475c22ad72a27299b42b9cb2460af93ca8b" exitCode=0 Feb 02 13:22:32 crc kubenswrapper[4721]: I0202 13:22:32.801021 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4msnh" event={"ID":"d51234ae-bf99-49bc-a3bc-1b392f993726","Type":"ContainerDied","Data":"2e42426b62e4be73df9f47cc7c9f8475c22ad72a27299b42b9cb2460af93ca8b"} Feb 02 13:22:32 crc kubenswrapper[4721]: I0202 13:22:32.804491 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"b6dbd607-3fa8-48e0-b420-4e939a47c460","Type":"ContainerStarted","Data":"151f885add37f297ff13c58dc930184ef33697a0f39d1cd7915f4828c211c2f2"} Feb 02 13:22:32 crc 
kubenswrapper[4721]: I0202 13:22:32.804712 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 02 13:22:32 crc kubenswrapper[4721]: I0202 13:22:32.809021 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a57cea33-806c-4028-b59f-9f5e65289eac","Type":"ContainerStarted","Data":"8c7f16314c0b8e5bcf446ede68e5cc86f001da0978108c61158b336f333031d2"} Feb 02 13:22:32 crc kubenswrapper[4721]: I0202 13:22:32.809406 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 02 13:22:32 crc kubenswrapper[4721]: I0202 13:22:32.840588 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=55.570093647 podStartE2EDuration="58.840565298s" podCreationTimestamp="2026-02-02 13:21:34 +0000 UTC" firstStartedPulling="2026-02-02 13:21:54.331673318 +0000 UTC m=+1254.634187707" lastFinishedPulling="2026-02-02 13:21:57.602144969 +0000 UTC m=+1257.904659358" observedRunningTime="2026-02-02 13:22:32.830122286 +0000 UTC m=+1293.132636675" watchObservedRunningTime="2026-02-02 13:22:32.840565298 +0000 UTC m=+1293.143079707" Feb 02 13:22:32 crc kubenswrapper[4721]: I0202 13:22:32.965609 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=54.783106792 podStartE2EDuration="58.965591009s" podCreationTimestamp="2026-02-02 13:21:34 +0000 UTC" firstStartedPulling="2026-02-02 13:21:53.427296118 +0000 UTC m=+1253.729810507" lastFinishedPulling="2026-02-02 13:21:57.609780335 +0000 UTC m=+1257.912294724" observedRunningTime="2026-02-02 13:22:32.965377672 +0000 UTC m=+1293.267892081" watchObservedRunningTime="2026-02-02 13:22:32.965591009 +0000 UTC m=+1293.268105418" Feb 02 13:22:33 crc kubenswrapper[4721]: I0202 13:22:33.055894 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=55.738103501 podStartE2EDuration="59.055866949s" podCreationTimestamp="2026-02-02 13:21:34 +0000 UTC" firstStartedPulling="2026-02-02 13:21:54.283774794 +0000 UTC m=+1254.586289183" lastFinishedPulling="2026-02-02 13:21:57.601538242 +0000 UTC m=+1257.904052631" observedRunningTime="2026-02-02 13:22:33.013802902 +0000 UTC m=+1293.316317291" watchObservedRunningTime="2026-02-02 13:22:33.055866949 +0000 UTC m=+1293.358381348" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.434669 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c47d165e-de4e-4f3a-8f66-4dab149c7b5e" path="/var/lib/kubelet/pods/c47d165e-de4e-4f3a-8f66-4dab149c7b5e/volumes" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.489926 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4msnh" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.515412 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e588-account-create-update-4crm9" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.580388 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f5fd-account-create-update-8s8md" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.669768 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzx4c\" (UniqueName: \"kubernetes.io/projected/d51234ae-bf99-49bc-a3bc-1b392f993726-kube-api-access-gzx4c\") pod \"d51234ae-bf99-49bc-a3bc-1b392f993726\" (UID: \"d51234ae-bf99-49bc-a3bc-1b392f993726\") " Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.669969 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8261a2f3-c66a-441c-9fc6-a7a6a744b8a3-operator-scripts\") pod \"8261a2f3-c66a-441c-9fc6-a7a6a744b8a3\" (UID: \"8261a2f3-c66a-441c-9fc6-a7a6a744b8a3\") " Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.670102 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpscl\" (UniqueName: \"kubernetes.io/projected/8261a2f3-c66a-441c-9fc6-a7a6a744b8a3-kube-api-access-kpscl\") pod \"8261a2f3-c66a-441c-9fc6-a7a6a744b8a3\" (UID: \"8261a2f3-c66a-441c-9fc6-a7a6a744b8a3\") " Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.670144 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d51234ae-bf99-49bc-a3bc-1b392f993726-operator-scripts\") pod \"d51234ae-bf99-49bc-a3bc-1b392f993726\" (UID: \"d51234ae-bf99-49bc-a3bc-1b392f993726\") " Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.671087 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d51234ae-bf99-49bc-a3bc-1b392f993726-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d51234ae-bf99-49bc-a3bc-1b392f993726" (UID: "d51234ae-bf99-49bc-a3bc-1b392f993726"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.671225 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8261a2f3-c66a-441c-9fc6-a7a6a744b8a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8261a2f3-c66a-441c-9fc6-a7a6a744b8a3" (UID: "8261a2f3-c66a-441c-9fc6-a7a6a744b8a3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.671591 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd-operator-scripts\") pod \"3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd\" (UID: \"3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd\") " Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.672275 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd" (UID: "3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.674810 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8261a2f3-c66a-441c-9fc6-a7a6a744b8a3-kube-api-access-kpscl" (OuterVolumeSpecName: "kube-api-access-kpscl") pod "8261a2f3-c66a-441c-9fc6-a7a6a744b8a3" (UID: "8261a2f3-c66a-441c-9fc6-a7a6a744b8a3"). InnerVolumeSpecName "kube-api-access-kpscl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.676484 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d51234ae-bf99-49bc-a3bc-1b392f993726-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.676515 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.676531 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8261a2f3-c66a-441c-9fc6-a7a6a744b8a3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.678136 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d51234ae-bf99-49bc-a3bc-1b392f993726-kube-api-access-gzx4c" (OuterVolumeSpecName: "kube-api-access-gzx4c") pod "d51234ae-bf99-49bc-a3bc-1b392f993726" (UID: "d51234ae-bf99-49bc-a3bc-1b392f993726"). InnerVolumeSpecName "kube-api-access-gzx4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.745886 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-b945b" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.777324 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lfn7\" (UniqueName: \"kubernetes.io/projected/3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd-kube-api-access-8lfn7\") pod \"3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd\" (UID: \"3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd\") " Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.777404 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e071a9e9-d1fa-41c2-a0b4-3ddc2470055b-operator-scripts\") pod \"e071a9e9-d1fa-41c2-a0b4-3ddc2470055b\" (UID: \"e071a9e9-d1fa-41c2-a0b4-3ddc2470055b\") " Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.777556 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmwlz\" (UniqueName: \"kubernetes.io/projected/e071a9e9-d1fa-41c2-a0b4-3ddc2470055b-kube-api-access-mmwlz\") pod \"e071a9e9-d1fa-41c2-a0b4-3ddc2470055b\" (UID: \"e071a9e9-d1fa-41c2-a0b4-3ddc2470055b\") " Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.778038 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpscl\" (UniqueName: \"kubernetes.io/projected/8261a2f3-c66a-441c-9fc6-a7a6a744b8a3-kube-api-access-kpscl\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.778058 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzx4c\" (UniqueName: \"kubernetes.io/projected/d51234ae-bf99-49bc-a3bc-1b392f993726-kube-api-access-gzx4c\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.781799 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e071a9e9-d1fa-41c2-a0b4-3ddc2470055b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e071a9e9-d1fa-41c2-a0b4-3ddc2470055b" (UID: "e071a9e9-d1fa-41c2-a0b4-3ddc2470055b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.782273 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e071a9e9-d1fa-41c2-a0b4-3ddc2470055b-kube-api-access-mmwlz" (OuterVolumeSpecName: "kube-api-access-mmwlz") pod "e071a9e9-d1fa-41c2-a0b4-3ddc2470055b" (UID: "e071a9e9-d1fa-41c2-a0b4-3ddc2470055b"). InnerVolumeSpecName "kube-api-access-mmwlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.787454 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd-kube-api-access-8lfn7" (OuterVolumeSpecName: "kube-api-access-8lfn7") pod "3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd" (UID: "3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd"). InnerVolumeSpecName "kube-api-access-8lfn7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.787676 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-hrqtc"] Feb 02 13:22:34 crc kubenswrapper[4721]: E0202 13:22:34.788320 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d51234ae-bf99-49bc-a3bc-1b392f993726" containerName="mariadb-database-create" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.788347 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d51234ae-bf99-49bc-a3bc-1b392f993726" containerName="mariadb-database-create" Feb 02 13:22:34 crc kubenswrapper[4721]: E0202 13:22:34.788369 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1ef9e5-26ab-4b7b-b255-73968ed867ce" containerName="mariadb-account-create-update" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.788380 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1ef9e5-26ab-4b7b-b255-73968ed867ce" containerName="mariadb-account-create-update" Feb 02 13:22:34 crc kubenswrapper[4721]: E0202 13:22:34.788397 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b74e699-bc4f-4415-a9dc-8ad52d916bc0" containerName="mariadb-account-create-update" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.788405 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b74e699-bc4f-4415-a9dc-8ad52d916bc0" containerName="mariadb-account-create-update" Feb 02 13:22:34 crc kubenswrapper[4721]: E0202 13:22:34.788421 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e715356-9848-439f-a13d-eb00f34521ec" containerName="console" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.788429 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e715356-9848-439f-a13d-eb00f34521ec" containerName="console" Feb 02 13:22:34 crc kubenswrapper[4721]: E0202 13:22:34.788451 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e72b39-6085-4753-8b7d-a93a80c95d49" containerName="mariadb-database-create" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.788475 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e72b39-6085-4753-8b7d-a93a80c95d49" containerName="mariadb-database-create" Feb 02 13:22:34 crc kubenswrapper[4721]: E0202 13:22:34.788490 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd" containerName="mariadb-account-create-update" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.788498 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd" containerName="mariadb-account-create-update" Feb 02 13:22:34 crc kubenswrapper[4721]: E0202 13:22:34.788515 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d46c6f8-aff0-4b28-a71b-d98a894afdaf" containerName="mariadb-database-create" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.788523 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d46c6f8-aff0-4b28-a71b-d98a894afdaf" containerName="mariadb-database-create" Feb 02 13:22:34 crc kubenswrapper[4721]: E0202 13:22:34.788533 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c47d165e-de4e-4f3a-8f66-4dab149c7b5e" containerName="mariadb-account-create-update" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.788540 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="c47d165e-de4e-4f3a-8f66-4dab149c7b5e" containerName="mariadb-account-create-update" Feb 02 13:22:34 crc 
kubenswrapper[4721]: E0202 13:22:34.788555 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8261a2f3-c66a-441c-9fc6-a7a6a744b8a3" containerName="mariadb-account-create-update" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.788563 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="8261a2f3-c66a-441c-9fc6-a7a6a744b8a3" containerName="mariadb-account-create-update" Feb 02 13:22:34 crc kubenswrapper[4721]: E0202 13:22:34.788572 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e071a9e9-d1fa-41c2-a0b4-3ddc2470055b" containerName="mariadb-database-create" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.788581 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="e071a9e9-d1fa-41c2-a0b4-3ddc2470055b" containerName="mariadb-database-create" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.788817 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="8261a2f3-c66a-441c-9fc6-a7a6a744b8a3" containerName="mariadb-account-create-update" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.788833 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="e071a9e9-d1fa-41c2-a0b4-3ddc2470055b" containerName="mariadb-database-create" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.788844 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd" containerName="mariadb-account-create-update" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.788864 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="d51234ae-bf99-49bc-a3bc-1b392f993726" containerName="mariadb-database-create" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.788876 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d46c6f8-aff0-4b28-a71b-d98a894afdaf" containerName="mariadb-database-create" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.788887 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5e72b39-6085-4753-8b7d-a93a80c95d49" containerName="mariadb-database-create" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.788904 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e715356-9848-439f-a13d-eb00f34521ec" containerName="console" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.788917 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e1ef9e5-26ab-4b7b-b255-73968ed867ce" containerName="mariadb-account-create-update" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.788928 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b74e699-bc4f-4415-a9dc-8ad52d916bc0" containerName="mariadb-account-create-update" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.788941 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="c47d165e-de4e-4f3a-8f66-4dab149c7b5e" containerName="mariadb-account-create-update" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.789799 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-hrqtc" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.792265 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-f7kg2" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.792540 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.838024 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-hrqtc"] Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.854600 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"4d21d961-1540-4610-89c0-ee265f66d728","Type":"ContainerStarted","Data":"3daccf3c85c15ee03cbfeaf49005630c3a330eee6f6b7cb983a9433547468750"} Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.855244 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.860234 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e588-account-create-update-4crm9" event={"ID":"8261a2f3-c66a-441c-9fc6-a7a6a744b8a3","Type":"ContainerDied","Data":"5e42e28e547417761133e79bed203e5b34892ed4d47a98b44e202ab8a75a5f74"} Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.860276 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e42e28e547417761133e79bed203e5b34892ed4d47a98b44e202ab8a75a5f74" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.860341 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e588-account-create-update-4crm9" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.875962 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4msnh" event={"ID":"d51234ae-bf99-49bc-a3bc-1b392f993726","Type":"ContainerDied","Data":"3241884a94bec9f4ab719e5306750fbea8f5c6d76bdd6847e89608ef5d722eca"} Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.876005 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3241884a94bec9f4ab719e5306750fbea8f5c6d76bdd6847e89608ef5d722eca" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.876171 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-4msnh" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.888123 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0531b398-2d44-42c2-bd6c-9e9f7ab8c85d-db-sync-config-data\") pod \"glance-db-sync-hrqtc\" (UID: \"0531b398-2d44-42c2-bd6c-9e9f7ab8c85d\") " pod="openstack/glance-db-sync-hrqtc" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.888326 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5rlj\" (UniqueName: \"kubernetes.io/projected/0531b398-2d44-42c2-bd6c-9e9f7ab8c85d-kube-api-access-s5rlj\") pod \"glance-db-sync-hrqtc\" (UID: \"0531b398-2d44-42c2-bd6c-9e9f7ab8c85d\") " pod="openstack/glance-db-sync-hrqtc" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.888406 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0531b398-2d44-42c2-bd6c-9e9f7ab8c85d-combined-ca-bundle\") pod \"glance-db-sync-hrqtc\" (UID: \"0531b398-2d44-42c2-bd6c-9e9f7ab8c85d\") " pod="openstack/glance-db-sync-hrqtc" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.888605 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0531b398-2d44-42c2-bd6c-9e9f7ab8c85d-config-data\") pod \"glance-db-sync-hrqtc\" (UID: \"0531b398-2d44-42c2-bd6c-9e9f7ab8c85d\") " pod="openstack/glance-db-sync-hrqtc" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.888870 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lfn7\" (UniqueName: \"kubernetes.io/projected/3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd-kube-api-access-8lfn7\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.888961 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e071a9e9-d1fa-41c2-a0b4-3ddc2470055b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.889076 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmwlz\" (UniqueName: \"kubernetes.io/projected/e071a9e9-d1fa-41c2-a0b4-3ddc2470055b-kube-api-access-mmwlz\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.898439 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-b945b" event={"ID":"e071a9e9-d1fa-41c2-a0b4-3ddc2470055b","Type":"ContainerDied","Data":"9e99bc7a71cde3a1726dccb420d69877ac39ece8a909aee37a1f367da53f1236"} Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.898798 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-b945b" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.898827 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e99bc7a71cde3a1726dccb420d69877ac39ece8a909aee37a1f367da53f1236" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.919718 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f5fd-account-create-update-8s8md" event={"ID":"3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd","Type":"ContainerDied","Data":"0a13c2c88a06c50133a2555869f297c8c47bf2195f08e760fef3c19438536755"} Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.919767 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a13c2c88a06c50133a2555869f297c8c47bf2195f08e760fef3c19438536755" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.919874 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f5fd-account-create-update-8s8md" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.929446 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=55.762468398 podStartE2EDuration="1m0.929416572s" podCreationTimestamp="2026-02-02 13:21:34 +0000 UTC" firstStartedPulling="2026-02-02 13:21:52.432697638 +0000 UTC m=+1252.735212027" lastFinishedPulling="2026-02-02 13:21:57.599645812 +0000 UTC m=+1257.902160201" observedRunningTime="2026-02-02 13:22:34.906667977 +0000 UTC m=+1295.209182366" watchObservedRunningTime="2026-02-02 13:22:34.929416572 +0000 UTC m=+1295.231930961" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.992745 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5rlj\" (UniqueName: \"kubernetes.io/projected/0531b398-2d44-42c2-bd6c-9e9f7ab8c85d-kube-api-access-s5rlj\") pod \"glance-db-sync-hrqtc\" (UID: \"0531b398-2d44-42c2-bd6c-9e9f7ab8c85d\") " pod="openstack/glance-db-sync-hrqtc" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.993135 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0531b398-2d44-42c2-bd6c-9e9f7ab8c85d-combined-ca-bundle\") pod \"glance-db-sync-hrqtc\" (UID: \"0531b398-2d44-42c2-bd6c-9e9f7ab8c85d\") " pod="openstack/glance-db-sync-hrqtc" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.993234 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0531b398-2d44-42c2-bd6c-9e9f7ab8c85d-config-data\") pod \"glance-db-sync-hrqtc\" (UID: \"0531b398-2d44-42c2-bd6c-9e9f7ab8c85d\") " pod="openstack/glance-db-sync-hrqtc" Feb 02 13:22:34 crc kubenswrapper[4721]: I0202 13:22:34.993327 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0531b398-2d44-42c2-bd6c-9e9f7ab8c85d-db-sync-config-data\") pod \"glance-db-sync-hrqtc\" (UID: \"0531b398-2d44-42c2-bd6c-9e9f7ab8c85d\") " pod="openstack/glance-db-sync-hrqtc" Feb 02 13:22:35 crc kubenswrapper[4721]: I0202 13:22:35.001238 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0531b398-2d44-42c2-bd6c-9e9f7ab8c85d-db-sync-config-data\") pod \"glance-db-sync-hrqtc\" (UID: \"0531b398-2d44-42c2-bd6c-9e9f7ab8c85d\") " pod="openstack/glance-db-sync-hrqtc" Feb 02 13:22:35 crc 
kubenswrapper[4721]: I0202 13:22:35.004399 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0531b398-2d44-42c2-bd6c-9e9f7ab8c85d-config-data\") pod \"glance-db-sync-hrqtc\" (UID: \"0531b398-2d44-42c2-bd6c-9e9f7ab8c85d\") " pod="openstack/glance-db-sync-hrqtc" Feb 02 13:22:35 crc kubenswrapper[4721]: I0202 13:22:35.011764 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0531b398-2d44-42c2-bd6c-9e9f7ab8c85d-combined-ca-bundle\") pod \"glance-db-sync-hrqtc\" (UID: \"0531b398-2d44-42c2-bd6c-9e9f7ab8c85d\") " pod="openstack/glance-db-sync-hrqtc" Feb 02 13:22:35 crc kubenswrapper[4721]: I0202 13:22:35.015721 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5rlj\" (UniqueName: \"kubernetes.io/projected/0531b398-2d44-42c2-bd6c-9e9f7ab8c85d-kube-api-access-s5rlj\") pod \"glance-db-sync-hrqtc\" (UID: \"0531b398-2d44-42c2-bd6c-9e9f7ab8c85d\") " pod="openstack/glance-db-sync-hrqtc" Feb 02 13:22:35 crc kubenswrapper[4721]: I0202 13:22:35.061731 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 02 13:22:35 crc kubenswrapper[4721]: I0202 13:22:35.124399 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-hrqtc" Feb 02 13:22:35 crc kubenswrapper[4721]: I0202 13:22:35.783282 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-hrqtc"] Feb 02 13:22:36 crc kubenswrapper[4721]: I0202 13:22:35.931212 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hrqtc" event={"ID":"0531b398-2d44-42c2-bd6c-9e9f7ab8c85d","Type":"ContainerStarted","Data":"bcc0541c75c4c63b75ba7d9003f7cf6d54e2e725d204aed88938ca8246ddf26a"} Feb 02 13:22:36 crc kubenswrapper[4721]: I0202 13:22:35.936553 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"754114a2-a012-43fe-923b-a8cc3df91aa0","Type":"ContainerStarted","Data":"048ea46306d4b172a5e08114cafc091bb471ab145b3144b948269acd23158658"} Feb 02 13:22:36 crc kubenswrapper[4721]: I0202 13:22:35.974708 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=14.474550418 podStartE2EDuration="54.974690863s" podCreationTimestamp="2026-02-02 13:21:41 +0000 UTC" firstStartedPulling="2026-02-02 13:21:54.269860517 +0000 UTC m=+1254.572374906" lastFinishedPulling="2026-02-02 13:22:34.770000962 +0000 UTC m=+1295.072515351" observedRunningTime="2026-02-02 13:22:35.959107432 +0000 UTC m=+1296.261621841" watchObservedRunningTime="2026-02-02 13:22:35.974690863 +0000 UTC m=+1296.277205252" Feb 02 13:22:36 crc kubenswrapper[4721]: I0202 13:22:36.151932 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-29h28"] Feb 02 13:22:36 crc kubenswrapper[4721]: I0202 13:22:36.153368 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-29h28" Feb 02 13:22:36 crc kubenswrapper[4721]: I0202 13:22:36.160133 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 02 13:22:36 crc kubenswrapper[4721]: I0202 13:22:36.165960 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-29h28"] Feb 02 13:22:36 crc kubenswrapper[4721]: I0202 13:22:36.325417 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca971a3a-e7fd-4f31-be3d-aff5722ad49f-operator-scripts\") pod \"root-account-create-update-29h28\" (UID: \"ca971a3a-e7fd-4f31-be3d-aff5722ad49f\") " pod="openstack/root-account-create-update-29h28" Feb 02 13:22:36 crc kubenswrapper[4721]: I0202 13:22:36.325638 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q99nt\" (UniqueName: \"kubernetes.io/projected/ca971a3a-e7fd-4f31-be3d-aff5722ad49f-kube-api-access-q99nt\") pod \"root-account-create-update-29h28\" (UID: \"ca971a3a-e7fd-4f31-be3d-aff5722ad49f\") " pod="openstack/root-account-create-update-29h28" Feb 02 13:22:36 crc kubenswrapper[4721]: I0202 13:22:36.427331 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q99nt\" (UniqueName: \"kubernetes.io/projected/ca971a3a-e7fd-4f31-be3d-aff5722ad49f-kube-api-access-q99nt\") pod \"root-account-create-update-29h28\" (UID: \"ca971a3a-e7fd-4f31-be3d-aff5722ad49f\") " pod="openstack/root-account-create-update-29h28" Feb 02 13:22:36 crc kubenswrapper[4721]: I0202 13:22:36.427478 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca971a3a-e7fd-4f31-be3d-aff5722ad49f-operator-scripts\") pod \"root-account-create-update-29h28\" (UID: \"ca971a3a-e7fd-4f31-be3d-aff5722ad49f\") " pod="openstack/root-account-create-update-29h28" Feb 02 13:22:36 crc kubenswrapper[4721]: I0202 13:22:36.428639 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca971a3a-e7fd-4f31-be3d-aff5722ad49f-operator-scripts\") pod \"root-account-create-update-29h28\" (UID: \"ca971a3a-e7fd-4f31-be3d-aff5722ad49f\") " pod="openstack/root-account-create-update-29h28" Feb 02 13:22:36 crc kubenswrapper[4721]: I0202 13:22:36.492736 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q99nt\" (UniqueName: \"kubernetes.io/projected/ca971a3a-e7fd-4f31-be3d-aff5722ad49f-kube-api-access-q99nt\") pod \"root-account-create-update-29h28\" (UID: \"ca971a3a-e7fd-4f31-be3d-aff5722ad49f\") " pod="openstack/root-account-create-update-29h28" Feb 02 13:22:36 crc kubenswrapper[4721]: I0202 13:22:36.708891 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-4hfg5"] Feb 02 13:22:36 crc kubenswrapper[4721]: I0202 13:22:36.710421 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-4hfg5" Feb 02 13:22:36 crc kubenswrapper[4721]: I0202 13:22:36.721012 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-4hfg5"] Feb 02 13:22:36 crc kubenswrapper[4721]: I0202 13:22:36.787015 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-29h28" Feb 02 13:22:36 crc kubenswrapper[4721]: I0202 13:22:36.838391 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5s7g\" (UniqueName: \"kubernetes.io/projected/5b1f70a8-6b41-4823-991b-934510a608fd-kube-api-access-g5s7g\") pod \"mysqld-exporter-openstack-cell1-db-create-4hfg5\" (UID: \"5b1f70a8-6b41-4823-991b-934510a608fd\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-4hfg5" Feb 02 13:22:36 crc kubenswrapper[4721]: I0202 13:22:36.838436 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b1f70a8-6b41-4823-991b-934510a608fd-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-4hfg5\" (UID: \"5b1f70a8-6b41-4823-991b-934510a608fd\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-4hfg5" Feb 02 13:22:36 crc kubenswrapper[4721]: I0202 13:22:36.896258 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-x4f5m" Feb 02 13:22:36 crc kubenswrapper[4721]: I0202 13:22:36.938537 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-ed80-account-create-update-w8c4k"] Feb 02 13:22:36 crc kubenswrapper[4721]: I0202 13:22:36.940464 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-ed80-account-create-update-w8c4k" Feb 02 13:22:36 crc kubenswrapper[4721]: I0202 13:22:36.941219 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5s7g\" (UniqueName: \"kubernetes.io/projected/5b1f70a8-6b41-4823-991b-934510a608fd-kube-api-access-g5s7g\") pod \"mysqld-exporter-openstack-cell1-db-create-4hfg5\" (UID: \"5b1f70a8-6b41-4823-991b-934510a608fd\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-4hfg5" Feb 02 13:22:36 crc kubenswrapper[4721]: I0202 13:22:36.941266 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b1f70a8-6b41-4823-991b-934510a608fd-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-4hfg5\" (UID: \"5b1f70a8-6b41-4823-991b-934510a608fd\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-4hfg5" Feb 02 13:22:36 crc kubenswrapper[4721]: I0202 13:22:36.942851 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b1f70a8-6b41-4823-991b-934510a608fd-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-4hfg5\" (UID: \"5b1f70a8-6b41-4823-991b-934510a608fd\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-4hfg5" Feb 02 13:22:36 crc kubenswrapper[4721]: I0202 13:22:36.950224 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Feb 02 13:22:36 crc kubenswrapper[4721]: I0202 13:22:36.975452 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-ed80-account-create-update-w8c4k"] Feb 02 13:22:36 crc kubenswrapper[4721]: I0202 13:22:36.980014 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5s7g\" (UniqueName: \"kubernetes.io/projected/5b1f70a8-6b41-4823-991b-934510a608fd-kube-api-access-g5s7g\") pod \"mysqld-exporter-openstack-cell1-db-create-4hfg5\" (UID: \"5b1f70a8-6b41-4823-991b-934510a608fd\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-4hfg5" Feb 02 13:22:37 crc kubenswrapper[4721]: I0202 13:22:37.023970 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-x9n8f"] Feb 02 13:22:37 crc kubenswrapper[4721]: I0202 13:22:37.024300 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-x9n8f" podUID="02b496f0-c99d-43e9-9e8a-03286d8966ab" containerName="dnsmasq-dns" containerID="cri-o://6404bb6f1951c91af9f8453984cfaf480f6307bbda26d7f2b85d0b4cee4e2109" gracePeriod=10 Feb 02 13:22:37 crc kubenswrapper[4721]: I0202 13:22:37.029357 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-4hfg5" Feb 02 13:22:37 crc kubenswrapper[4721]: I0202 13:22:37.043663 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0af979c8-207f-455c-b383-fd22b1ec6758-operator-scripts\") pod \"mysqld-exporter-ed80-account-create-update-w8c4k\" (UID: \"0af979c8-207f-455c-b383-fd22b1ec6758\") " pod="openstack/mysqld-exporter-ed80-account-create-update-w8c4k" Feb 02 13:22:37 crc kubenswrapper[4721]: I0202 13:22:37.043769 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt798\" (UniqueName: \"kubernetes.io/projected/0af979c8-207f-455c-b383-fd22b1ec6758-kube-api-access-nt798\") pod \"mysqld-exporter-ed80-account-create-update-w8c4k\" (UID: \"0af979c8-207f-455c-b383-fd22b1ec6758\") " pod="openstack/mysqld-exporter-ed80-account-create-update-w8c4k" Feb 02 13:22:37 crc kubenswrapper[4721]: I0202 13:22:37.146385 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0af979c8-207f-455c-b383-fd22b1ec6758-operator-scripts\") pod \"mysqld-exporter-ed80-account-create-update-w8c4k\" (UID: \"0af979c8-207f-455c-b383-fd22b1ec6758\") " pod="openstack/mysqld-exporter-ed80-account-create-update-w8c4k" Feb 02 13:22:37 crc kubenswrapper[4721]: I0202 13:22:37.146514 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt798\" (UniqueName: \"kubernetes.io/projected/0af979c8-207f-455c-b383-fd22b1ec6758-kube-api-access-nt798\") pod \"mysqld-exporter-ed80-account-create-update-w8c4k\" (UID: \"0af979c8-207f-455c-b383-fd22b1ec6758\") " pod="openstack/mysqld-exporter-ed80-account-create-update-w8c4k" Feb 02 13:22:37 crc kubenswrapper[4721]: I0202 13:22:37.148003 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0af979c8-207f-455c-b383-fd22b1ec6758-operator-scripts\") pod \"mysqld-exporter-ed80-account-create-update-w8c4k\" (UID: \"0af979c8-207f-455c-b383-fd22b1ec6758\") " pod="openstack/mysqld-exporter-ed80-account-create-update-w8c4k" Feb 02 13:22:37 crc kubenswrapper[4721]: I0202 13:22:37.174268 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt798\" (UniqueName: \"kubernetes.io/projected/0af979c8-207f-455c-b383-fd22b1ec6758-kube-api-access-nt798\") pod \"mysqld-exporter-ed80-account-create-update-w8c4k\" (UID: \"0af979c8-207f-455c-b383-fd22b1ec6758\") " pod="openstack/mysqld-exporter-ed80-account-create-update-w8c4k" Feb 02 13:22:37 crc kubenswrapper[4721]: I0202 13:22:37.269991 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-ed80-account-create-update-w8c4k" Feb 02 13:22:37 crc kubenswrapper[4721]: I0202 13:22:37.662336 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-29h28"] Feb 02 13:22:37 crc kubenswrapper[4721]: W0202 13:22:37.690035 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca971a3a_e7fd_4f31_be3d_aff5722ad49f.slice/crio-5db7592d5d6daeb2d6d714fe913b4d15bf0ef1454ed3e3239598bd3ede182d5c WatchSource:0}: Error finding container 5db7592d5d6daeb2d6d714fe913b4d15bf0ef1454ed3e3239598bd3ede182d5c: Status 404 returned error can't find the container with id 5db7592d5d6daeb2d6d714fe913b4d15bf0ef1454ed3e3239598bd3ede182d5c Feb 02 13:22:37 crc kubenswrapper[4721]: I0202 13:22:37.867500 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-4hfg5"] Feb 02 13:22:37 crc kubenswrapper[4721]: I0202 13:22:37.968780 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-29h28" event={"ID":"ca971a3a-e7fd-4f31-be3d-aff5722ad49f","Type":"ContainerStarted","Data":"5db7592d5d6daeb2d6d714fe913b4d15bf0ef1454ed3e3239598bd3ede182d5c"} Feb 02 13:22:37 crc kubenswrapper[4721]: I0202 13:22:37.970126 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-4hfg5" event={"ID":"5b1f70a8-6b41-4823-991b-934510a608fd","Type":"ContainerStarted","Data":"71750a46c65416c25c5b66883a4f5da230a09a9166598a5e808e638dc3a967a8"} Feb 02 13:22:37 crc kubenswrapper[4721]: I0202 13:22:37.973452 4721 generic.go:334] "Generic (PLEG): container finished" podID="02b496f0-c99d-43e9-9e8a-03286d8966ab" containerID="6404bb6f1951c91af9f8453984cfaf480f6307bbda26d7f2b85d0b4cee4e2109" exitCode=0 Feb 02 13:22:37 crc kubenswrapper[4721]: I0202 13:22:37.973512 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-x9n8f" event={"ID":"02b496f0-c99d-43e9-9e8a-03286d8966ab","Type":"ContainerDied","Data":"6404bb6f1951c91af9f8453984cfaf480f6307bbda26d7f2b85d0b4cee4e2109"} Feb 02 13:22:37 crc kubenswrapper[4721]: I0202 13:22:37.973536 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-x9n8f" event={"ID":"02b496f0-c99d-43e9-9e8a-03286d8966ab","Type":"ContainerDied","Data":"d756b18dd24d6e38b37f9d5682fc7973ac79057046e2526e8afd7ae562cccd1e"} Feb 02 13:22:37 crc kubenswrapper[4721]: I0202 13:22:37.973549 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d756b18dd24d6e38b37f9d5682fc7973ac79057046e2526e8afd7ae562cccd1e" Feb 02 13:22:38 crc kubenswrapper[4721]: I0202 13:22:38.015800 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-x9n8f" Feb 02 13:22:38 crc kubenswrapper[4721]: I0202 13:22:38.075745 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-ed80-account-create-update-w8c4k"] Feb 02 13:22:38 crc kubenswrapper[4721]: W0202 13:22:38.086009 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0af979c8_207f_455c_b383_fd22b1ec6758.slice/crio-337291d05bad9f4bf913d361a804dc119df0d75bc05d5d9d53f3055d24ce4bd2 WatchSource:0}: Error finding container 337291d05bad9f4bf913d361a804dc119df0d75bc05d5d9d53f3055d24ce4bd2: Status 404 returned error can't find the container with id 337291d05bad9f4bf913d361a804dc119df0d75bc05d5d9d53f3055d24ce4bd2 Feb 02 13:22:38 crc kubenswrapper[4721]: I0202 13:22:38.111520 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:38 crc kubenswrapper[4721]: I0202 13:22:38.179558 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b496f0-c99d-43e9-9e8a-03286d8966ab-config\") pod \"02b496f0-c99d-43e9-9e8a-03286d8966ab\" (UID: \"02b496f0-c99d-43e9-9e8a-03286d8966ab\") " Feb 02 13:22:38 crc kubenswrapper[4721]: I0202 13:22:38.179631 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02b496f0-c99d-43e9-9e8a-03286d8966ab-dns-svc\") pod \"02b496f0-c99d-43e9-9e8a-03286d8966ab\" (UID: \"02b496f0-c99d-43e9-9e8a-03286d8966ab\") " Feb 02 13:22:38 crc kubenswrapper[4721]: I0202 13:22:38.179842 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj6v4\" (UniqueName: \"kubernetes.io/projected/02b496f0-c99d-43e9-9e8a-03286d8966ab-kube-api-access-dj6v4\") pod \"02b496f0-c99d-43e9-9e8a-03286d8966ab\" (UID: \"02b496f0-c99d-43e9-9e8a-03286d8966ab\") " Feb 02 13:22:38 crc kubenswrapper[4721]: I0202 13:22:38.179902 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02b496f0-c99d-43e9-9e8a-03286d8966ab-ovsdbserver-nb\") pod \"02b496f0-c99d-43e9-9e8a-03286d8966ab\" (UID: \"02b496f0-c99d-43e9-9e8a-03286d8966ab\") " Feb 02 13:22:38 crc kubenswrapper[4721]: I0202 13:22:38.180032 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02b496f0-c99d-43e9-9e8a-03286d8966ab-ovsdbserver-sb\") pod \"02b496f0-c99d-43e9-9e8a-03286d8966ab\" (UID: \"02b496f0-c99d-43e9-9e8a-03286d8966ab\") " Feb 02 13:22:38 crc kubenswrapper[4721]: I0202 13:22:38.199291 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02b496f0-c99d-43e9-9e8a-03286d8966ab-kube-api-access-dj6v4" (OuterVolumeSpecName: "kube-api-access-dj6v4") pod "02b496f0-c99d-43e9-9e8a-03286d8966ab" (UID: "02b496f0-c99d-43e9-9e8a-03286d8966ab"). InnerVolumeSpecName "kube-api-access-dj6v4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:22:38 crc kubenswrapper[4721]: I0202 13:22:38.286049 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj6v4\" (UniqueName: \"kubernetes.io/projected/02b496f0-c99d-43e9-9e8a-03286d8966ab-kube-api-access-dj6v4\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:38 crc kubenswrapper[4721]: I0202 13:22:38.296916 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02b496f0-c99d-43e9-9e8a-03286d8966ab-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "02b496f0-c99d-43e9-9e8a-03286d8966ab" (UID: "02b496f0-c99d-43e9-9e8a-03286d8966ab"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:38 crc kubenswrapper[4721]: I0202 13:22:38.306893 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02b496f0-c99d-43e9-9e8a-03286d8966ab-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "02b496f0-c99d-43e9-9e8a-03286d8966ab" (UID: "02b496f0-c99d-43e9-9e8a-03286d8966ab"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:38 crc kubenswrapper[4721]: I0202 13:22:38.326624 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02b496f0-c99d-43e9-9e8a-03286d8966ab-config" (OuterVolumeSpecName: "config") pod "02b496f0-c99d-43e9-9e8a-03286d8966ab" (UID: "02b496f0-c99d-43e9-9e8a-03286d8966ab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:38 crc kubenswrapper[4721]: I0202 13:22:38.335629 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02b496f0-c99d-43e9-9e8a-03286d8966ab-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "02b496f0-c99d-43e9-9e8a-03286d8966ab" (UID: "02b496f0-c99d-43e9-9e8a-03286d8966ab"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:38 crc kubenswrapper[4721]: I0202 13:22:38.388568 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02b496f0-c99d-43e9-9e8a-03286d8966ab-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:38 crc kubenswrapper[4721]: I0202 13:22:38.388614 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02b496f0-c99d-43e9-9e8a-03286d8966ab-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:38 crc kubenswrapper[4721]: I0202 13:22:38.388626 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b496f0-c99d-43e9-9e8a-03286d8966ab-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:38 crc kubenswrapper[4721]: I0202 13:22:38.388636 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02b496f0-c99d-43e9-9e8a-03286d8966ab-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:38 crc kubenswrapper[4721]: I0202 13:22:38.490531 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eabe6b07-da9d-4980-99b4-12c02640c88d-etc-swift\") pod \"swift-storage-0\" (UID: \"eabe6b07-da9d-4980-99b4-12c02640c88d\") " pod="openstack/swift-storage-0" Feb 02 13:22:38 crc kubenswrapper[4721]: E0202 13:22:38.490799 4721 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 13:22:38 crc kubenswrapper[4721]: E0202 13:22:38.490833 4721 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 13:22:38 crc kubenswrapper[4721]: E0202 13:22:38.490902 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eabe6b07-da9d-4980-99b4-12c02640c88d-etc-swift podName:eabe6b07-da9d-4980-99b4-12c02640c88d nodeName:}" failed. No retries permitted until 2026-02-02 13:22:54.490881071 +0000 UTC m=+1314.793395460 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/eabe6b07-da9d-4980-99b4-12c02640c88d-etc-swift") pod "swift-storage-0" (UID: "eabe6b07-da9d-4980-99b4-12c02640c88d") : configmap "swift-ring-files" not found Feb 02 13:22:38 crc kubenswrapper[4721]: I0202 13:22:38.986927 4721 generic.go:334] "Generic (PLEG): container finished" podID="5b1f70a8-6b41-4823-991b-934510a608fd" containerID="786d3f2324d84717b8e858d54067e88e6ae7b7e91f7a2cafa3e176317085dd6f" exitCode=0 Feb 02 13:22:38 crc kubenswrapper[4721]: I0202 13:22:38.987303 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-4hfg5" event={"ID":"5b1f70a8-6b41-4823-991b-934510a608fd","Type":"ContainerDied","Data":"786d3f2324d84717b8e858d54067e88e6ae7b7e91f7a2cafa3e176317085dd6f"} Feb 02 13:22:38 crc kubenswrapper[4721]: I0202 13:22:38.993398 4721 generic.go:334] "Generic (PLEG): container finished" podID="0af979c8-207f-455c-b383-fd22b1ec6758" containerID="c6d0cc979d5c7bfcd7c17e38f66db5aa66eb2098b9d5dff2ff5da7fb49088c43" exitCode=0 Feb 02 13:22:38 crc kubenswrapper[4721]: I0202 13:22:38.993532 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-ed80-account-create-update-w8c4k" event={"ID":"0af979c8-207f-455c-b383-fd22b1ec6758","Type":"ContainerDied","Data":"c6d0cc979d5c7bfcd7c17e38f66db5aa66eb2098b9d5dff2ff5da7fb49088c43"} Feb 02 13:22:38 crc kubenswrapper[4721]: I0202 13:22:38.993561 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-ed80-account-create-update-w8c4k" event={"ID":"0af979c8-207f-455c-b383-fd22b1ec6758","Type":"ContainerStarted","Data":"337291d05bad9f4bf913d361a804dc119df0d75bc05d5d9d53f3055d24ce4bd2"} Feb 02 13:22:38 crc kubenswrapper[4721]: I0202 13:22:38.995595 4721 generic.go:334] "Generic (PLEG): container finished" podID="ca971a3a-e7fd-4f31-be3d-aff5722ad49f" containerID="74a99b13280ba5e058fb97f392a9a2baa22e1224fb962f08950b59d7a1606135" exitCode=0 Feb 02 13:22:38 crc kubenswrapper[4721]: I0202 13:22:38.995667 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-x9n8f" Feb 02 13:22:38 crc kubenswrapper[4721]: I0202 13:22:38.995801 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-29h28" event={"ID":"ca971a3a-e7fd-4f31-be3d-aff5722ad49f","Type":"ContainerDied","Data":"74a99b13280ba5e058fb97f392a9a2baa22e1224fb962f08950b59d7a1606135"} Feb 02 13:22:39 crc kubenswrapper[4721]: I0202 13:22:39.051720 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-x9n8f"] Feb 02 13:22:39 crc kubenswrapper[4721]: I0202 13:22:39.060083 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-x9n8f"] Feb 02 13:22:40 crc kubenswrapper[4721]: I0202 13:22:40.007561 4721 generic.go:334] "Generic (PLEG): container finished" podID="bd1f15d5-77dc-4b6d-81bf-c2a8286da820" containerID="6cc2ebc0d6497a1bebde8bd624dd9a320b42cd5a52feffbd34983448a9d2b4a3" exitCode=0 Feb 02 13:22:40 crc kubenswrapper[4721]: I0202 13:22:40.007687 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4rnrx" event={"ID":"bd1f15d5-77dc-4b6d-81bf-c2a8286da820","Type":"ContainerDied","Data":"6cc2ebc0d6497a1bebde8bd624dd9a320b42cd5a52feffbd34983448a9d2b4a3"} Feb 02 13:22:40 crc kubenswrapper[4721]: I0202 13:22:40.459567 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02b496f0-c99d-43e9-9e8a-03286d8966ab" path="/var/lib/kubelet/pods/02b496f0-c99d-43e9-9e8a-03286d8966ab/volumes" Feb 02 13:22:40 crc kubenswrapper[4721]: I0202 13:22:40.833302 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-29h28" Feb 02 13:22:40 crc kubenswrapper[4721]: I0202 13:22:40.838990 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-ed80-account-create-update-w8c4k" Feb 02 13:22:40 crc kubenswrapper[4721]: I0202 13:22:40.850059 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-4hfg5" Feb 02 13:22:40 crc kubenswrapper[4721]: I0202 13:22:40.946219 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b1f70a8-6b41-4823-991b-934510a608fd-operator-scripts\") pod \"5b1f70a8-6b41-4823-991b-934510a608fd\" (UID: \"5b1f70a8-6b41-4823-991b-934510a608fd\") " Feb 02 13:22:40 crc kubenswrapper[4721]: I0202 13:22:40.946273 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0af979c8-207f-455c-b383-fd22b1ec6758-operator-scripts\") pod \"0af979c8-207f-455c-b383-fd22b1ec6758\" (UID: \"0af979c8-207f-455c-b383-fd22b1ec6758\") " Feb 02 13:22:40 crc kubenswrapper[4721]: I0202 13:22:40.946422 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5s7g\" (UniqueName: \"kubernetes.io/projected/5b1f70a8-6b41-4823-991b-934510a608fd-kube-api-access-g5s7g\") pod \"5b1f70a8-6b41-4823-991b-934510a608fd\" (UID: \"5b1f70a8-6b41-4823-991b-934510a608fd\") " Feb 02 13:22:40 crc kubenswrapper[4721]: I0202 13:22:40.946473 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca971a3a-e7fd-4f31-be3d-aff5722ad49f-operator-scripts\") pod \"ca971a3a-e7fd-4f31-be3d-aff5722ad49f\" (UID: \"ca971a3a-e7fd-4f31-be3d-aff5722ad49f\") " Feb 02 13:22:40 crc kubenswrapper[4721]: I0202 13:22:40.946499 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q99nt\" (UniqueName: \"kubernetes.io/projected/ca971a3a-e7fd-4f31-be3d-aff5722ad49f-kube-api-access-q99nt\") pod \"ca971a3a-e7fd-4f31-be3d-aff5722ad49f\" (UID: \"ca971a3a-e7fd-4f31-be3d-aff5722ad49f\") " Feb 02 13:22:40 crc kubenswrapper[4721]: I0202 13:22:40.946521 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt798\" (UniqueName: \"kubernetes.io/projected/0af979c8-207f-455c-b383-fd22b1ec6758-kube-api-access-nt798\") pod \"0af979c8-207f-455c-b383-fd22b1ec6758\" (UID: \"0af979c8-207f-455c-b383-fd22b1ec6758\") " Feb 02 13:22:40 crc kubenswrapper[4721]: I0202 13:22:40.947266 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca971a3a-e7fd-4f31-be3d-aff5722ad49f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ca971a3a-e7fd-4f31-be3d-aff5722ad49f" (UID: "ca971a3a-e7fd-4f31-be3d-aff5722ad49f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:40 crc kubenswrapper[4721]: I0202 13:22:40.947376 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b1f70a8-6b41-4823-991b-934510a608fd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5b1f70a8-6b41-4823-991b-934510a608fd" (UID: "5b1f70a8-6b41-4823-991b-934510a608fd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:40 crc kubenswrapper[4721]: I0202 13:22:40.947399 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0af979c8-207f-455c-b383-fd22b1ec6758-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0af979c8-207f-455c-b383-fd22b1ec6758" (UID: "0af979c8-207f-455c-b383-fd22b1ec6758"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:40 crc kubenswrapper[4721]: I0202 13:22:40.976737 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0af979c8-207f-455c-b383-fd22b1ec6758-kube-api-access-nt798" (OuterVolumeSpecName: "kube-api-access-nt798") pod "0af979c8-207f-455c-b383-fd22b1ec6758" (UID: "0af979c8-207f-455c-b383-fd22b1ec6758"). InnerVolumeSpecName "kube-api-access-nt798". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:22:40 crc kubenswrapper[4721]: I0202 13:22:40.977021 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca971a3a-e7fd-4f31-be3d-aff5722ad49f-kube-api-access-q99nt" (OuterVolumeSpecName: "kube-api-access-q99nt") pod "ca971a3a-e7fd-4f31-be3d-aff5722ad49f" (UID: "ca971a3a-e7fd-4f31-be3d-aff5722ad49f"). InnerVolumeSpecName "kube-api-access-q99nt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:22:40 crc kubenswrapper[4721]: I0202 13:22:40.978254 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b1f70a8-6b41-4823-991b-934510a608fd-kube-api-access-g5s7g" (OuterVolumeSpecName: "kube-api-access-g5s7g") pod "5b1f70a8-6b41-4823-991b-934510a608fd" (UID: "5b1f70a8-6b41-4823-991b-934510a608fd"). InnerVolumeSpecName "kube-api-access-g5s7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.022256 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-4hfg5" Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.024597 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-4hfg5" event={"ID":"5b1f70a8-6b41-4823-991b-934510a608fd","Type":"ContainerDied","Data":"71750a46c65416c25c5b66883a4f5da230a09a9166598a5e808e638dc3a967a8"} Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.024645 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71750a46c65416c25c5b66883a4f5da230a09a9166598a5e808e638dc3a967a8" Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.033923 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-ed80-account-create-update-w8c4k" event={"ID":"0af979c8-207f-455c-b383-fd22b1ec6758","Type":"ContainerDied","Data":"337291d05bad9f4bf913d361a804dc119df0d75bc05d5d9d53f3055d24ce4bd2"} Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.033971 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="337291d05bad9f4bf913d361a804dc119df0d75bc05d5d9d53f3055d24ce4bd2" Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.034121 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-ed80-account-create-update-w8c4k" Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.050214 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5s7g\" (UniqueName: \"kubernetes.io/projected/5b1f70a8-6b41-4823-991b-934510a608fd-kube-api-access-g5s7g\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.050240 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca971a3a-e7fd-4f31-be3d-aff5722ad49f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.050251 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q99nt\" (UniqueName: \"kubernetes.io/projected/ca971a3a-e7fd-4f31-be3d-aff5722ad49f-kube-api-access-q99nt\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.050259 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt798\" (UniqueName: \"kubernetes.io/projected/0af979c8-207f-455c-b383-fd22b1ec6758-kube-api-access-nt798\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.050270 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b1f70a8-6b41-4823-991b-934510a608fd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.050279 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0af979c8-207f-455c-b383-fd22b1ec6758-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.050476 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-29h28" event={"ID":"ca971a3a-e7fd-4f31-be3d-aff5722ad49f","Type":"ContainerDied","Data":"5db7592d5d6daeb2d6d714fe913b4d15bf0ef1454ed3e3239598bd3ede182d5c"} Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.050515 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5db7592d5d6daeb2d6d714fe913b4d15bf0ef1454ed3e3239598bd3ede182d5c" Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.050544 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-29h28" Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.074961 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-l5h78" podUID="298ac2ef-6edb-40cb-bb92-8a8e039f333b" containerName="ovn-controller" probeResult="failure" output=< Feb 02 13:22:41 crc kubenswrapper[4721]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 02 13:22:41 crc kubenswrapper[4721]: > Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.439038 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4rnrx" Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.560409 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-dispersionconf\") pod \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\" (UID: \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\") " Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.560752 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5rm5\" (UniqueName: \"kubernetes.io/projected/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-kube-api-access-q5rm5\") pod \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\" (UID: \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\") " Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.560862 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-scripts\") pod \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\" (UID: \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\") " Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.560946 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-ring-data-devices\") pod \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\" (UID: \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\") " Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.561014 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-swiftconf\") pod \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\" (UID: \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\") " Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.561111 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-etc-swift\") pod \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\" (UID: \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\") " Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.561165 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-combined-ca-bundle\") pod \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\" (UID: \"bd1f15d5-77dc-4b6d-81bf-c2a8286da820\") " Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.565268 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "bd1f15d5-77dc-4b6d-81bf-c2a8286da820" (UID: "bd1f15d5-77dc-4b6d-81bf-c2a8286da820"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.565891 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "bd1f15d5-77dc-4b6d-81bf-c2a8286da820" (UID: "bd1f15d5-77dc-4b6d-81bf-c2a8286da820"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.568834 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-kube-api-access-q5rm5" (OuterVolumeSpecName: "kube-api-access-q5rm5") pod "bd1f15d5-77dc-4b6d-81bf-c2a8286da820" (UID: "bd1f15d5-77dc-4b6d-81bf-c2a8286da820"). InnerVolumeSpecName "kube-api-access-q5rm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.570369 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "bd1f15d5-77dc-4b6d-81bf-c2a8286da820" (UID: "bd1f15d5-77dc-4b6d-81bf-c2a8286da820"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.594962 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd1f15d5-77dc-4b6d-81bf-c2a8286da820" (UID: "bd1f15d5-77dc-4b6d-81bf-c2a8286da820"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.598737 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "bd1f15d5-77dc-4b6d-81bf-c2a8286da820" (UID: "bd1f15d5-77dc-4b6d-81bf-c2a8286da820"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.601237 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-scripts" (OuterVolumeSpecName: "scripts") pod "bd1f15d5-77dc-4b6d-81bf-c2a8286da820" (UID: "bd1f15d5-77dc-4b6d-81bf-c2a8286da820"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.663608 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.663653 4721 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.663669 4721 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.663683 4721 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.663695 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.663708 4721 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:41 crc kubenswrapper[4721]: I0202 13:22:41.663722 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5rm5\" (UniqueName: \"kubernetes.io/projected/bd1f15d5-77dc-4b6d-81bf-c2a8286da820-kube-api-access-q5rm5\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:42 crc kubenswrapper[4721]: I0202 13:22:42.080574 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4rnrx" event={"ID":"bd1f15d5-77dc-4b6d-81bf-c2a8286da820","Type":"ContainerDied","Data":"75aba502874704cff954aba43521e9024690a740951c41d1966d4ffb33d85812"} Feb 02 13:22:42 crc kubenswrapper[4721]: I0202 13:22:42.081272 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75aba502874704cff954aba43521e9024690a740951c41d1966d4ffb33d85812" Feb 02 13:22:42 crc kubenswrapper[4721]: I0202 13:22:42.081278 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4rnrx" Feb 02 13:22:42 crc kubenswrapper[4721]: I0202 13:22:42.794944 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-29h28"] Feb 02 13:22:42 crc kubenswrapper[4721]: I0202 13:22:42.806755 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-29h28"] Feb 02 13:22:43 crc kubenswrapper[4721]: I0202 13:22:43.111726 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:43 crc kubenswrapper[4721]: I0202 13:22:43.114393 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:44 crc kubenswrapper[4721]: I0202 13:22:44.102666 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:44 crc kubenswrapper[4721]: I0202 13:22:44.421509 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca971a3a-e7fd-4f31-be3d-aff5722ad49f" path="/var/lib/kubelet/pods/ca971a3a-e7fd-4f31-be3d-aff5722ad49f/volumes" Feb 02 13:22:45 crc kubenswrapper[4721]: I0202 13:22:45.827511 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="a57cea33-806c-4028-b59f-9f5e65289eac" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Feb 02 13:22:45 crc kubenswrapper[4721]: I0202 13:22:45.986675 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-l5h78" podUID="298ac2ef-6edb-40cb-bb92-8a8e039f333b" containerName="ovn-controller" probeResult="failure" output=< Feb 02 13:22:45 crc kubenswrapper[4721]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 02 13:22:45 crc kubenswrapper[4721]: > Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.011544 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-gz9nz" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.024761 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-gz9nz" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.177922 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="4d21d961-1540-4610-89c0-ee265f66d728" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.178883 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="b6dbd607-3fa8-48e0-b420-4e939a47c460" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.190993 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-nftsl"] Feb 02 13:22:46 crc kubenswrapper[4721]: E0202 13:22:46.191719 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b496f0-c99d-43e9-9e8a-03286d8966ab" containerName="init" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.191817 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b496f0-c99d-43e9-9e8a-03286d8966ab" containerName="init" Feb 02 13:22:46 crc kubenswrapper[4721]: E0202 13:22:46.192664 4721 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b496f0-c99d-43e9-9e8a-03286d8966ab" containerName="dnsmasq-dns" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.192765 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b496f0-c99d-43e9-9e8a-03286d8966ab" containerName="dnsmasq-dns" Feb 02 13:22:46 crc kubenswrapper[4721]: E0202 13:22:46.192846 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b1f70a8-6b41-4823-991b-934510a608fd" containerName="mariadb-database-create" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.192930 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b1f70a8-6b41-4823-991b-934510a608fd" containerName="mariadb-database-create" Feb 02 13:22:46 crc kubenswrapper[4721]: E0202 13:22:46.193022 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd1f15d5-77dc-4b6d-81bf-c2a8286da820" containerName="swift-ring-rebalance" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.193131 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd1f15d5-77dc-4b6d-81bf-c2a8286da820" containerName="swift-ring-rebalance" Feb 02 13:22:46 crc kubenswrapper[4721]: E0202 13:22:46.193235 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0af979c8-207f-455c-b383-fd22b1ec6758" containerName="mariadb-account-create-update" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.193313 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="0af979c8-207f-455c-b383-fd22b1ec6758" containerName="mariadb-account-create-update" Feb 02 13:22:46 crc kubenswrapper[4721]: E0202 13:22:46.193413 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca971a3a-e7fd-4f31-be3d-aff5722ad49f" containerName="mariadb-account-create-update" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.193489 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca971a3a-e7fd-4f31-be3d-aff5722ad49f" containerName="mariadb-account-create-update" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.193905 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="0af979c8-207f-455c-b383-fd22b1ec6758" containerName="mariadb-account-create-update" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.194008 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="02b496f0-c99d-43e9-9e8a-03286d8966ab" containerName="dnsmasq-dns" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.194113 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd1f15d5-77dc-4b6d-81bf-c2a8286da820" containerName="swift-ring-rebalance" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.194215 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b1f70a8-6b41-4823-991b-934510a608fd" containerName="mariadb-database-create" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.194292 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca971a3a-e7fd-4f31-be3d-aff5722ad49f" containerName="mariadb-account-create-update" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.195387 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-nftsl" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.198466 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.212754 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-nftsl"] Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.259993 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-l5h78-config-5bmh9"] Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.262098 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-l5h78-config-5bmh9" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.264670 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.270884 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-l5h78-config-5bmh9"] Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.274316 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d5ab80d-40d3-4259-a13a-efb59d66b725-operator-scripts\") pod \"root-account-create-update-nftsl\" (UID: \"3d5ab80d-40d3-4259-a13a-efb59d66b725\") " pod="openstack/root-account-create-update-nftsl" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.274509 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpl4j\" (UniqueName: \"kubernetes.io/projected/3d5ab80d-40d3-4259-a13a-efb59d66b725-kube-api-access-fpl4j\") pod \"root-account-create-update-nftsl\" (UID: \"3d5ab80d-40d3-4259-a13a-efb59d66b725\") " pod="openstack/root-account-create-update-nftsl" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.376018 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ffb2d2fc-d882-4343-a182-5d4cae12692f-var-run\") pod \"ovn-controller-l5h78-config-5bmh9\" (UID: \"ffb2d2fc-d882-4343-a182-5d4cae12692f\") " pod="openstack/ovn-controller-l5h78-config-5bmh9" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.376128 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ffb2d2fc-d882-4343-a182-5d4cae12692f-additional-scripts\") pod \"ovn-controller-l5h78-config-5bmh9\" (UID: \"ffb2d2fc-d882-4343-a182-5d4cae12692f\") " pod="openstack/ovn-controller-l5h78-config-5bmh9" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.376153 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffb2d2fc-d882-4343-a182-5d4cae12692f-scripts\") pod \"ovn-controller-l5h78-config-5bmh9\" (UID: \"ffb2d2fc-d882-4343-a182-5d4cae12692f\") " pod="openstack/ovn-controller-l5h78-config-5bmh9" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.376250 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpl4j\" (UniqueName: \"kubernetes.io/projected/3d5ab80d-40d3-4259-a13a-efb59d66b725-kube-api-access-fpl4j\") pod \"root-account-create-update-nftsl\" (UID: 
\"3d5ab80d-40d3-4259-a13a-efb59d66b725\") " pod="openstack/root-account-create-update-nftsl" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.376282 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ffb2d2fc-d882-4343-a182-5d4cae12692f-var-log-ovn\") pod \"ovn-controller-l5h78-config-5bmh9\" (UID: \"ffb2d2fc-d882-4343-a182-5d4cae12692f\") " pod="openstack/ovn-controller-l5h78-config-5bmh9" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.376347 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ffb2d2fc-d882-4343-a182-5d4cae12692f-var-run-ovn\") pod \"ovn-controller-l5h78-config-5bmh9\" (UID: \"ffb2d2fc-d882-4343-a182-5d4cae12692f\") " pod="openstack/ovn-controller-l5h78-config-5bmh9" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.376386 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8xf8\" (UniqueName: \"kubernetes.io/projected/ffb2d2fc-d882-4343-a182-5d4cae12692f-kube-api-access-t8xf8\") pod \"ovn-controller-l5h78-config-5bmh9\" (UID: \"ffb2d2fc-d882-4343-a182-5d4cae12692f\") " pod="openstack/ovn-controller-l5h78-config-5bmh9" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.376510 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d5ab80d-40d3-4259-a13a-efb59d66b725-operator-scripts\") pod \"root-account-create-update-nftsl\" (UID: \"3d5ab80d-40d3-4259-a13a-efb59d66b725\") " pod="openstack/root-account-create-update-nftsl" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.377419 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d5ab80d-40d3-4259-a13a-efb59d66b725-operator-scripts\") pod \"root-account-create-update-nftsl\" (UID: \"3d5ab80d-40d3-4259-a13a-efb59d66b725\") " pod="openstack/root-account-create-update-nftsl" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.397397 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpl4j\" (UniqueName: \"kubernetes.io/projected/3d5ab80d-40d3-4259-a13a-efb59d66b725-kube-api-access-fpl4j\") pod \"root-account-create-update-nftsl\" (UID: \"3d5ab80d-40d3-4259-a13a-efb59d66b725\") " pod="openstack/root-account-create-update-nftsl" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.478401 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ffb2d2fc-d882-4343-a182-5d4cae12692f-var-run-ovn\") pod \"ovn-controller-l5h78-config-5bmh9\" (UID: \"ffb2d2fc-d882-4343-a182-5d4cae12692f\") " pod="openstack/ovn-controller-l5h78-config-5bmh9" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.478490 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8xf8\" (UniqueName: \"kubernetes.io/projected/ffb2d2fc-d882-4343-a182-5d4cae12692f-kube-api-access-t8xf8\") pod \"ovn-controller-l5h78-config-5bmh9\" (UID: \"ffb2d2fc-d882-4343-a182-5d4cae12692f\") " pod="openstack/ovn-controller-l5h78-config-5bmh9" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.478595 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/ffb2d2fc-d882-4343-a182-5d4cae12692f-var-run\") pod \"ovn-controller-l5h78-config-5bmh9\" (UID: \"ffb2d2fc-d882-4343-a182-5d4cae12692f\") " pod="openstack/ovn-controller-l5h78-config-5bmh9" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.478660 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ffb2d2fc-d882-4343-a182-5d4cae12692f-additional-scripts\") pod \"ovn-controller-l5h78-config-5bmh9\" (UID: \"ffb2d2fc-d882-4343-a182-5d4cae12692f\") " pod="openstack/ovn-controller-l5h78-config-5bmh9" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.478685 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffb2d2fc-d882-4343-a182-5d4cae12692f-scripts\") pod \"ovn-controller-l5h78-config-5bmh9\" (UID: \"ffb2d2fc-d882-4343-a182-5d4cae12692f\") " pod="openstack/ovn-controller-l5h78-config-5bmh9" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.478744 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ffb2d2fc-d882-4343-a182-5d4cae12692f-var-log-ovn\") pod \"ovn-controller-l5h78-config-5bmh9\" (UID: \"ffb2d2fc-d882-4343-a182-5d4cae12692f\") " pod="openstack/ovn-controller-l5h78-config-5bmh9" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.478857 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ffb2d2fc-d882-4343-a182-5d4cae12692f-var-run-ovn\") pod \"ovn-controller-l5h78-config-5bmh9\" (UID: \"ffb2d2fc-d882-4343-a182-5d4cae12692f\") " pod="openstack/ovn-controller-l5h78-config-5bmh9" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.478897 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ffb2d2fc-d882-4343-a182-5d4cae12692f-var-log-ovn\") pod \"ovn-controller-l5h78-config-5bmh9\" (UID: \"ffb2d2fc-d882-4343-a182-5d4cae12692f\") " pod="openstack/ovn-controller-l5h78-config-5bmh9" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.478950 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ffb2d2fc-d882-4343-a182-5d4cae12692f-var-run\") pod \"ovn-controller-l5h78-config-5bmh9\" (UID: \"ffb2d2fc-d882-4343-a182-5d4cae12692f\") " pod="openstack/ovn-controller-l5h78-config-5bmh9" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.479659 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ffb2d2fc-d882-4343-a182-5d4cae12692f-additional-scripts\") pod \"ovn-controller-l5h78-config-5bmh9\" (UID: \"ffb2d2fc-d882-4343-a182-5d4cae12692f\") " pod="openstack/ovn-controller-l5h78-config-5bmh9" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.480874 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffb2d2fc-d882-4343-a182-5d4cae12692f-scripts\") pod \"ovn-controller-l5h78-config-5bmh9\" (UID: \"ffb2d2fc-d882-4343-a182-5d4cae12692f\") " pod="openstack/ovn-controller-l5h78-config-5bmh9" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.497554 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8xf8\" (UniqueName: 
\"kubernetes.io/projected/ffb2d2fc-d882-4343-a182-5d4cae12692f-kube-api-access-t8xf8\") pod \"ovn-controller-l5h78-config-5bmh9\" (UID: \"ffb2d2fc-d882-4343-a182-5d4cae12692f\") " pod="openstack/ovn-controller-l5h78-config-5bmh9" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.525491 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nftsl" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.567291 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.596645 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-l5h78-config-5bmh9" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.859243 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.860668 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.862705 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.873079 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.990090 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jz54\" (UniqueName: \"kubernetes.io/projected/7a6930c7-1819-4b7d-baf6-773a8b68e568-kube-api-access-8jz54\") pod \"mysqld-exporter-0\" (UID: \"7a6930c7-1819-4b7d-baf6-773a8b68e568\") " pod="openstack/mysqld-exporter-0" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.990212 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a6930c7-1819-4b7d-baf6-773a8b68e568-config-data\") pod \"mysqld-exporter-0\" (UID: \"7a6930c7-1819-4b7d-baf6-773a8b68e568\") " pod="openstack/mysqld-exporter-0" Feb 02 13:22:46 crc kubenswrapper[4721]: I0202 13:22:46.990263 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a6930c7-1819-4b7d-baf6-773a8b68e568-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"7a6930c7-1819-4b7d-baf6-773a8b68e568\") " pod="openstack/mysqld-exporter-0" Feb 02 13:22:47 crc kubenswrapper[4721]: I0202 13:22:47.092715 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jz54\" (UniqueName: \"kubernetes.io/projected/7a6930c7-1819-4b7d-baf6-773a8b68e568-kube-api-access-8jz54\") pod \"mysqld-exporter-0\" (UID: \"7a6930c7-1819-4b7d-baf6-773a8b68e568\") " pod="openstack/mysqld-exporter-0" Feb 02 13:22:47 crc kubenswrapper[4721]: I0202 13:22:47.092773 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a6930c7-1819-4b7d-baf6-773a8b68e568-config-data\") pod \"mysqld-exporter-0\" (UID: \"7a6930c7-1819-4b7d-baf6-773a8b68e568\") " pod="openstack/mysqld-exporter-0" Feb 02 13:22:47 crc kubenswrapper[4721]: I0202 13:22:47.092909 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7a6930c7-1819-4b7d-baf6-773a8b68e568-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"7a6930c7-1819-4b7d-baf6-773a8b68e568\") " pod="openstack/mysqld-exporter-0" Feb 02 13:22:47 crc kubenswrapper[4721]: I0202 13:22:47.097579 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a6930c7-1819-4b7d-baf6-773a8b68e568-config-data\") pod \"mysqld-exporter-0\" (UID: \"7a6930c7-1819-4b7d-baf6-773a8b68e568\") " pod="openstack/mysqld-exporter-0" Feb 02 13:22:47 crc kubenswrapper[4721]: I0202 13:22:47.117797 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a6930c7-1819-4b7d-baf6-773a8b68e568-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"7a6930c7-1819-4b7d-baf6-773a8b68e568\") " pod="openstack/mysqld-exporter-0" Feb 02 13:22:47 crc kubenswrapper[4721]: I0202 13:22:47.123838 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jz54\" (UniqueName: \"kubernetes.io/projected/7a6930c7-1819-4b7d-baf6-773a8b68e568-kube-api-access-8jz54\") pod \"mysqld-exporter-0\" (UID: \"7a6930c7-1819-4b7d-baf6-773a8b68e568\") " pod="openstack/mysqld-exporter-0" Feb 02 13:22:47 crc kubenswrapper[4721]: I0202 13:22:47.163987 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 02 13:22:47 crc kubenswrapper[4721]: I0202 13:22:47.164364 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="754114a2-a012-43fe-923b-a8cc3df91aa0" containerName="prometheus" containerID="cri-o://ee494f7d068c5f80477a3c22c3a31526d01c4a02a690967dc4ac1b911a158a98" gracePeriod=600 Feb 02 13:22:47 crc kubenswrapper[4721]: I0202 13:22:47.164505 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="754114a2-a012-43fe-923b-a8cc3df91aa0" containerName="config-reloader" containerID="cri-o://cac1e1e212591677852178e9111f4a55b1b7ae8d7ed3f2354152657d155f18e1" gracePeriod=600 Feb 02 13:22:47 crc kubenswrapper[4721]: I0202 13:22:47.164516 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="754114a2-a012-43fe-923b-a8cc3df91aa0" containerName="thanos-sidecar" containerID="cri-o://048ea46306d4b172a5e08114cafc091bb471ab145b3144b948269acd23158658" gracePeriod=600 Feb 02 13:22:47 crc kubenswrapper[4721]: I0202 13:22:47.187053 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 02 13:22:48 crc kubenswrapper[4721]: I0202 13:22:48.130785 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="754114a2-a012-43fe-923b-a8cc3df91aa0" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.139:9090/-/ready\": dial tcp 10.217.0.139:9090: connect: connection refused" Feb 02 13:22:48 crc kubenswrapper[4721]: I0202 13:22:48.155279 4721 generic.go:334] "Generic (PLEG): container finished" podID="754114a2-a012-43fe-923b-a8cc3df91aa0" containerID="048ea46306d4b172a5e08114cafc091bb471ab145b3144b948269acd23158658" exitCode=0 Feb 02 13:22:48 crc kubenswrapper[4721]: I0202 13:22:48.155316 4721 generic.go:334] "Generic (PLEG): container finished" podID="754114a2-a012-43fe-923b-a8cc3df91aa0" containerID="cac1e1e212591677852178e9111f4a55b1b7ae8d7ed3f2354152657d155f18e1" exitCode=0 Feb 02 13:22:48 crc kubenswrapper[4721]: I0202 13:22:48.155324 4721 generic.go:334] "Generic (PLEG): container finished" podID="754114a2-a012-43fe-923b-a8cc3df91aa0" containerID="ee494f7d068c5f80477a3c22c3a31526d01c4a02a690967dc4ac1b911a158a98" exitCode=0 Feb 02 13:22:48 crc kubenswrapper[4721]: I0202 13:22:48.155349 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"754114a2-a012-43fe-923b-a8cc3df91aa0","Type":"ContainerDied","Data":"048ea46306d4b172a5e08114cafc091bb471ab145b3144b948269acd23158658"} Feb 02 13:22:48 crc kubenswrapper[4721]: I0202 13:22:48.155378 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"754114a2-a012-43fe-923b-a8cc3df91aa0","Type":"ContainerDied","Data":"cac1e1e212591677852178e9111f4a55b1b7ae8d7ed3f2354152657d155f18e1"} Feb 02 13:22:48 crc kubenswrapper[4721]: I0202 13:22:48.155393 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"754114a2-a012-43fe-923b-a8cc3df91aa0","Type":"ContainerDied","Data":"ee494f7d068c5f80477a3c22c3a31526d01c4a02a690967dc4ac1b911a158a98"} Feb 02 13:22:50 crc kubenswrapper[4721]: I0202 13:22:50.981884 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-l5h78" podUID="298ac2ef-6edb-40cb-bb92-8a8e039f333b" containerName="ovn-controller" probeResult="failure" output=< Feb 02 13:22:50 crc kubenswrapper[4721]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 02 13:22:50 crc kubenswrapper[4721]: > Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.112772 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="754114a2-a012-43fe-923b-a8cc3df91aa0" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.139:9090/-/ready\": dial tcp 10.217.0.139:9090: connect: connection refused" Feb 02 13:22:53 crc kubenswrapper[4721]: E0202 13:22:53.232890 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 02 13:22:53 crc kubenswrapper[4721]: E0202 13:22:53.233156 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s5rlj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-hrqtc_openstack(0531b398-2d44-42c2-bd6c-9e9f7ab8c85d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 13:22:53 crc kubenswrapper[4721]: E0202 13:22:53.234444 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-hrqtc" podUID="0531b398-2d44-42c2-bd6c-9e9f7ab8c85d" Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.351878 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-l5h78-config-5bmh9"] Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.388016 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-nftsl"] Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.597754 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.603764 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.615509 4721 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.675557 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/754114a2-a012-43fe-923b-a8cc3df91aa0-prometheus-metric-storage-rulefiles-0\") pod \"754114a2-a012-43fe-923b-a8cc3df91aa0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.675933 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/754114a2-a012-43fe-923b-a8cc3df91aa0-prometheus-metric-storage-rulefiles-2\") pod \"754114a2-a012-43fe-923b-a8cc3df91aa0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.676090 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/754114a2-a012-43fe-923b-a8cc3df91aa0-web-config\") pod \"754114a2-a012-43fe-923b-a8cc3df91aa0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.676243 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1c836ef-ba24-4c50-a59b-e33260ae1ac3\") pod \"754114a2-a012-43fe-923b-a8cc3df91aa0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.676269 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq5th\" (UniqueName: \"kubernetes.io/projected/754114a2-a012-43fe-923b-a8cc3df91aa0-kube-api-access-nq5th\") pod \"754114a2-a012-43fe-923b-a8cc3df91aa0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.676295 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/754114a2-a012-43fe-923b-a8cc3df91aa0-thanos-prometheus-http-client-file\") pod \"754114a2-a012-43fe-923b-a8cc3df91aa0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.676311 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/754114a2-a012-43fe-923b-a8cc3df91aa0-config\") pod \"754114a2-a012-43fe-923b-a8cc3df91aa0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.676338 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/754114a2-a012-43fe-923b-a8cc3df91aa0-tls-assets\") pod \"754114a2-a012-43fe-923b-a8cc3df91aa0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.676386 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/754114a2-a012-43fe-923b-a8cc3df91aa0-config-out\") pod \"754114a2-a012-43fe-923b-a8cc3df91aa0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.676411 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/754114a2-a012-43fe-923b-a8cc3df91aa0-prometheus-metric-storage-rulefiles-1\") pod \"754114a2-a012-43fe-923b-a8cc3df91aa0\" (UID: \"754114a2-a012-43fe-923b-a8cc3df91aa0\") " Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.676788 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/754114a2-a012-43fe-923b-a8cc3df91aa0-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "754114a2-a012-43fe-923b-a8cc3df91aa0" (UID: "754114a2-a012-43fe-923b-a8cc3df91aa0"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.677818 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/754114a2-a012-43fe-923b-a8cc3df91aa0-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "754114a2-a012-43fe-923b-a8cc3df91aa0" (UID: "754114a2-a012-43fe-923b-a8cc3df91aa0"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.679440 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/754114a2-a012-43fe-923b-a8cc3df91aa0-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "754114a2-a012-43fe-923b-a8cc3df91aa0" (UID: "754114a2-a012-43fe-923b-a8cc3df91aa0"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.683448 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/754114a2-a012-43fe-923b-a8cc3df91aa0-config" (OuterVolumeSpecName: "config") pod "754114a2-a012-43fe-923b-a8cc3df91aa0" (UID: "754114a2-a012-43fe-923b-a8cc3df91aa0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.683913 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/754114a2-a012-43fe-923b-a8cc3df91aa0-config-out" (OuterVolumeSpecName: "config-out") pod "754114a2-a012-43fe-923b-a8cc3df91aa0" (UID: "754114a2-a012-43fe-923b-a8cc3df91aa0"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.685011 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/754114a2-a012-43fe-923b-a8cc3df91aa0-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "754114a2-a012-43fe-923b-a8cc3df91aa0" (UID: "754114a2-a012-43fe-923b-a8cc3df91aa0"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.685118 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/754114a2-a012-43fe-923b-a8cc3df91aa0-kube-api-access-nq5th" (OuterVolumeSpecName: "kube-api-access-nq5th") pod "754114a2-a012-43fe-923b-a8cc3df91aa0" (UID: "754114a2-a012-43fe-923b-a8cc3df91aa0"). InnerVolumeSpecName "kube-api-access-nq5th". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.693437 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/754114a2-a012-43fe-923b-a8cc3df91aa0-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "754114a2-a012-43fe-923b-a8cc3df91aa0" (UID: "754114a2-a012-43fe-923b-a8cc3df91aa0"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.721770 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1c836ef-ba24-4c50-a59b-e33260ae1ac3" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "754114a2-a012-43fe-923b-a8cc3df91aa0" (UID: "754114a2-a012-43fe-923b-a8cc3df91aa0"). InnerVolumeSpecName "pvc-c1c836ef-ba24-4c50-a59b-e33260ae1ac3". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.729321 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/754114a2-a012-43fe-923b-a8cc3df91aa0-web-config" (OuterVolumeSpecName: "web-config") pod "754114a2-a012-43fe-923b-a8cc3df91aa0" (UID: "754114a2-a012-43fe-923b-a8cc3df91aa0"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.779272 4721 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/754114a2-a012-43fe-923b-a8cc3df91aa0-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.779304 4721 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/754114a2-a012-43fe-923b-a8cc3df91aa0-web-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.779337 4721 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c1c836ef-ba24-4c50-a59b-e33260ae1ac3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1c836ef-ba24-4c50-a59b-e33260ae1ac3\") on node \"crc\" " Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.779349 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq5th\" (UniqueName: \"kubernetes.io/projected/754114a2-a012-43fe-923b-a8cc3df91aa0-kube-api-access-nq5th\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.779359 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/754114a2-a012-43fe-923b-a8cc3df91aa0-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.779369 4721 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/754114a2-a012-43fe-923b-a8cc3df91aa0-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.779378 4721 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/754114a2-a012-43fe-923b-a8cc3df91aa0-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.779388 4721 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/754114a2-a012-43fe-923b-a8cc3df91aa0-config-out\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.779398 4721 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/754114a2-a012-43fe-923b-a8cc3df91aa0-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.779407 4721 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/754114a2-a012-43fe-923b-a8cc3df91aa0-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.814749 4721 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.814944 4721 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c1c836ef-ba24-4c50-a59b-e33260ae1ac3" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1c836ef-ba24-4c50-a59b-e33260ae1ac3") on node "crc" Feb 02 13:22:53 crc kubenswrapper[4721]: I0202 13:22:53.881788 4721 reconciler_common.go:293] "Volume detached for volume \"pvc-c1c836ef-ba24-4c50-a59b-e33260ae1ac3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1c836ef-ba24-4c50-a59b-e33260ae1ac3\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.231134 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"754114a2-a012-43fe-923b-a8cc3df91aa0","Type":"ContainerDied","Data":"1ae72f1d8842f29156e47082662e67e943468c1763e7bc830d0036db2470a455"} Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.231229 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.232366 4721 scope.go:117] "RemoveContainer" containerID="048ea46306d4b172a5e08114cafc091bb471ab145b3144b948269acd23158658" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.232835 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"7a6930c7-1819-4b7d-baf6-773a8b68e568","Type":"ContainerStarted","Data":"29066421e6cce726a66f30e6952c937493f0f81dbe0ff9779f6c880b60322c1e"} Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.239661 4721 generic.go:334] "Generic (PLEG): container finished" podID="ffb2d2fc-d882-4343-a182-5d4cae12692f" containerID="80fd8d4c523d759364a503f4e0957e812daf0265c52b4e63f92c47e96ac7e275" exitCode=0 Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.239779 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-l5h78-config-5bmh9" event={"ID":"ffb2d2fc-d882-4343-a182-5d4cae12692f","Type":"ContainerDied","Data":"80fd8d4c523d759364a503f4e0957e812daf0265c52b4e63f92c47e96ac7e275"} Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.239815 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-l5h78-config-5bmh9" event={"ID":"ffb2d2fc-d882-4343-a182-5d4cae12692f","Type":"ContainerStarted","Data":"834db9ab0bc9ea710cdeeae1fdd49ed39cc3b00bb16ebc533c33cc39366d14fc"} Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.242128 4721 generic.go:334] "Generic (PLEG): container finished" podID="3d5ab80d-40d3-4259-a13a-efb59d66b725" containerID="a4049e92c383c0eb65178e6eb81956222b0a85112475c87886800360320c1322" exitCode=0 Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.242203 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nftsl" event={"ID":"3d5ab80d-40d3-4259-a13a-efb59d66b725","Type":"ContainerDied","Data":"a4049e92c383c0eb65178e6eb81956222b0a85112475c87886800360320c1322"} Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.242226 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nftsl" event={"ID":"3d5ab80d-40d3-4259-a13a-efb59d66b725","Type":"ContainerStarted","Data":"28498d7488cd8662de9104fba2a52e44054093c2b223f3e35ab776768caaf308"} Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.259654 4721 scope.go:117] "RemoveContainer" containerID="cac1e1e212591677852178e9111f4a55b1b7ae8d7ed3f2354152657d155f18e1" 
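
Alongside the container-lifecycle events above, this stretch of the journal records repeated readiness-probe failures (prometheus-metric-storage-0, ovn-controller-l5h78) and, immediately below, the ErrImagePull / ImagePullBackOff cycle for glance-db-sync-hrqtc. A quick triage pass can count both per pod; the sketch below uses only the "Probe failed", "ErrImagePull", and "ImagePullBackOff" strings that appear verbatim in this log, and again assumes one journal record per line.

#!/usr/bin/env python3
# Sketch: per-pod counts of readiness-probe failures and image-pull errors.
import re
import sys
from collections import Counter

probe_fail = re.compile(r'"Probe failed" probeType="Readiness".*?pod="([^"]+)"')
pull_fail = re.compile(r'(?:ErrImagePull|ImagePullBackOff).*?pod="([^"]+)"')

def main(stream):
    probes, pulls = Counter(), Counter()
    for line in stream:
        m = probe_fail.search(line)
        if m:
            probes[m.group(1)] += 1
        m = pull_fail.search(line)
        if m:
            pulls[m.group(1)] += 1
    for title, counter in (("readiness probe failures", probes),
                           ("image pull errors", pulls)):
        print(f"{title}:")
        for pod, n in counter.most_common():
            print(f"  {n:4d}  {pod}")

if __name__ == "__main__":
    main(sys.stdin)

On this log it would surface openstack/prometheus-metric-storage-0 and openstack/ovn-controller-l5h78 under probe failures and openstack/glance-db-sync-hrqtc under image pull errors, which matches the events recorded around this point.
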
Feb 02 13:22:54 crc kubenswrapper[4721]: E0202 13:22:54.259888 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-hrqtc" podUID="0531b398-2d44-42c2-bd6c-9e9f7ab8c85d" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.304724 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.320238 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.323009 4721 scope.go:117] "RemoveContainer" containerID="ee494f7d068c5f80477a3c22c3a31526d01c4a02a690967dc4ac1b911a158a98" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.350371 4721 scope.go:117] "RemoveContainer" containerID="f51717e20215c350c32f4c0b374e223fc4a90aedd712f943c01301452ac10dc6" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.352285 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 02 13:22:54 crc kubenswrapper[4721]: E0202 13:22:54.360685 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="754114a2-a012-43fe-923b-a8cc3df91aa0" containerName="init-config-reloader" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.360752 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="754114a2-a012-43fe-923b-a8cc3df91aa0" containerName="init-config-reloader" Feb 02 13:22:54 crc kubenswrapper[4721]: E0202 13:22:54.360849 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="754114a2-a012-43fe-923b-a8cc3df91aa0" containerName="config-reloader" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.360859 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="754114a2-a012-43fe-923b-a8cc3df91aa0" containerName="config-reloader" Feb 02 13:22:54 crc kubenswrapper[4721]: E0202 13:22:54.360874 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="754114a2-a012-43fe-923b-a8cc3df91aa0" containerName="thanos-sidecar" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.360970 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="754114a2-a012-43fe-923b-a8cc3df91aa0" containerName="thanos-sidecar" Feb 02 13:22:54 crc kubenswrapper[4721]: E0202 13:22:54.361020 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="754114a2-a012-43fe-923b-a8cc3df91aa0" containerName="prometheus" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.361029 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="754114a2-a012-43fe-923b-a8cc3df91aa0" containerName="prometheus" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.361500 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="754114a2-a012-43fe-923b-a8cc3df91aa0" containerName="prometheus" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.361517 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="754114a2-a012-43fe-923b-a8cc3df91aa0" containerName="thanos-sidecar" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.361534 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="754114a2-a012-43fe-923b-a8cc3df91aa0" containerName="config-reloader" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.365673 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.367953 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-xt8ls" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.373403 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.373406 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.373744 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.374318 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.374414 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.375781 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.383406 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.385116 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.388243 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.427824 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="754114a2-a012-43fe-923b-a8cc3df91aa0" path="/var/lib/kubelet/pods/754114a2-a012-43fe-923b-a8cc3df91aa0/volumes" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.500044 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eabe6b07-da9d-4980-99b4-12c02640c88d-etc-swift\") pod \"swift-storage-0\" (UID: \"eabe6b07-da9d-4980-99b4-12c02640c88d\") " pod="openstack/swift-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.500182 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6a34d077-087f-4b04-98c5-22e09450dcb3-config\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.500232 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/6a34d077-087f-4b04-98c5-22e09450dcb3-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.500318 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c1c836ef-ba24-4c50-a59b-e33260ae1ac3\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1c836ef-ba24-4c50-a59b-e33260ae1ac3\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.500356 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/6a34d077-087f-4b04-98c5-22e09450dcb3-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.500403 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6a34d077-087f-4b04-98c5-22e09450dcb3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.500430 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prj5q\" (UniqueName: \"kubernetes.io/projected/6a34d077-087f-4b04-98c5-22e09450dcb3-kube-api-access-prj5q\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.500522 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6a34d077-087f-4b04-98c5-22e09450dcb3-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.500548 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6a34d077-087f-4b04-98c5-22e09450dcb3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.500613 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6a34d077-087f-4b04-98c5-22e09450dcb3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.500663 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6a34d077-087f-4b04-98c5-22e09450dcb3-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.500682 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/6a34d077-087f-4b04-98c5-22e09450dcb3-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.500772 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6a34d077-087f-4b04-98c5-22e09450dcb3-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.500805 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a34d077-087f-4b04-98c5-22e09450dcb3-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.511337 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eabe6b07-da9d-4980-99b4-12c02640c88d-etc-swift\") pod \"swift-storage-0\" (UID: \"eabe6b07-da9d-4980-99b4-12c02640c88d\") " pod="openstack/swift-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.515061 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.602956 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6a34d077-087f-4b04-98c5-22e09450dcb3-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.604407 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6a34d077-087f-4b04-98c5-22e09450dcb3-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.604622 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6a34d077-087f-4b04-98c5-22e09450dcb3-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.604750 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a34d077-087f-4b04-98c5-22e09450dcb3-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.604943 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6a34d077-087f-4b04-98c5-22e09450dcb3-config\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 
13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.605172 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/6a34d077-087f-4b04-98c5-22e09450dcb3-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.605342 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c1c836ef-ba24-4c50-a59b-e33260ae1ac3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1c836ef-ba24-4c50-a59b-e33260ae1ac3\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.605444 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/6a34d077-087f-4b04-98c5-22e09450dcb3-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.605561 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6a34d077-087f-4b04-98c5-22e09450dcb3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.605662 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prj5q\" (UniqueName: \"kubernetes.io/projected/6a34d077-087f-4b04-98c5-22e09450dcb3-kube-api-access-prj5q\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.606646 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6a34d077-087f-4b04-98c5-22e09450dcb3-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.606751 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6a34d077-087f-4b04-98c5-22e09450dcb3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.606995 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6a34d077-087f-4b04-98c5-22e09450dcb3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.609819 4721 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/6a34d077-087f-4b04-98c5-22e09450dcb3-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.624525 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6a34d077-087f-4b04-98c5-22e09450dcb3-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.625098 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6a34d077-087f-4b04-98c5-22e09450dcb3-config\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.625395 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.625421 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c1c836ef-ba24-4c50-a59b-e33260ae1ac3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1c836ef-ba24-4c50-a59b-e33260ae1ac3\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b8f571356fe2cb3489fcae1580a11d6ada33fdec8c8d1e0850e45e91197c9652/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.626411 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/6a34d077-087f-4b04-98c5-22e09450dcb3-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.626625 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6a34d077-087f-4b04-98c5-22e09450dcb3-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.626810 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6a34d077-087f-4b04-98c5-22e09450dcb3-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.626998 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a34d077-087f-4b04-98c5-22e09450dcb3-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 
13:22:54.630560 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6a34d077-087f-4b04-98c5-22e09450dcb3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.631946 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6a34d077-087f-4b04-98c5-22e09450dcb3-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.632041 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6a34d077-087f-4b04-98c5-22e09450dcb3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.632095 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prj5q\" (UniqueName: \"kubernetes.io/projected/6a34d077-087f-4b04-98c5-22e09450dcb3-kube-api-access-prj5q\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.635325 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6a34d077-087f-4b04-98c5-22e09450dcb3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.690435 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c1c836ef-ba24-4c50-a59b-e33260ae1ac3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1c836ef-ba24-4c50-a59b-e33260ae1ac3\") pod \"prometheus-metric-storage-0\" (UID: \"6a34d077-087f-4b04-98c5-22e09450dcb3\") " pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:54 crc kubenswrapper[4721]: I0202 13:22:54.704690 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 02 13:22:55 crc kubenswrapper[4721]: I0202 13:22:55.827877 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 02 13:22:55 crc kubenswrapper[4721]: I0202 13:22:55.889028 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nftsl" Feb 02 13:22:55 crc kubenswrapper[4721]: I0202 13:22:55.929813 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-l5h78-config-5bmh9" Feb 02 13:22:55 crc kubenswrapper[4721]: I0202 13:22:55.944236 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpl4j\" (UniqueName: \"kubernetes.io/projected/3d5ab80d-40d3-4259-a13a-efb59d66b725-kube-api-access-fpl4j\") pod \"3d5ab80d-40d3-4259-a13a-efb59d66b725\" (UID: \"3d5ab80d-40d3-4259-a13a-efb59d66b725\") " Feb 02 13:22:55 crc kubenswrapper[4721]: I0202 13:22:55.944451 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d5ab80d-40d3-4259-a13a-efb59d66b725-operator-scripts\") pod \"3d5ab80d-40d3-4259-a13a-efb59d66b725\" (UID: \"3d5ab80d-40d3-4259-a13a-efb59d66b725\") " Feb 02 13:22:55 crc kubenswrapper[4721]: I0202 13:22:55.945564 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d5ab80d-40d3-4259-a13a-efb59d66b725-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3d5ab80d-40d3-4259-a13a-efb59d66b725" (UID: "3d5ab80d-40d3-4259-a13a-efb59d66b725"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:55 crc kubenswrapper[4721]: I0202 13:22:55.946803 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d5ab80d-40d3-4259-a13a-efb59d66b725-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:55 crc kubenswrapper[4721]: I0202 13:22:55.959743 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d5ab80d-40d3-4259-a13a-efb59d66b725-kube-api-access-fpl4j" (OuterVolumeSpecName: "kube-api-access-fpl4j") pod "3d5ab80d-40d3-4259-a13a-efb59d66b725" (UID: "3d5ab80d-40d3-4259-a13a-efb59d66b725"). InnerVolumeSpecName "kube-api-access-fpl4j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:22:55 crc kubenswrapper[4721]: I0202 13:22:55.987678 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 02 13:22:55 crc kubenswrapper[4721]: W0202 13:22:55.989018 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a34d077_087f_4b04_98c5_22e09450dcb3.slice/crio-210ff9e6575bb6d9c6f06a24d54a63c727752ea3d455cc8dcc85c4b4bb3b8827 WatchSource:0}: Error finding container 210ff9e6575bb6d9c6f06a24d54a63c727752ea3d455cc8dcc85c4b4bb3b8827: Status 404 returned error can't find the container with id 210ff9e6575bb6d9c6f06a24d54a63c727752ea3d455cc8dcc85c4b4bb3b8827 Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.033982 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-l5h78" Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.048478 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffb2d2fc-d882-4343-a182-5d4cae12692f-scripts\") pod \"ffb2d2fc-d882-4343-a182-5d4cae12692f\" (UID: \"ffb2d2fc-d882-4343-a182-5d4cae12692f\") " Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.048582 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ffb2d2fc-d882-4343-a182-5d4cae12692f-var-run-ovn\") pod \"ffb2d2fc-d882-4343-a182-5d4cae12692f\" (UID: \"ffb2d2fc-d882-4343-a182-5d4cae12692f\") " Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.048794 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8xf8\" (UniqueName: \"kubernetes.io/projected/ffb2d2fc-d882-4343-a182-5d4cae12692f-kube-api-access-t8xf8\") pod \"ffb2d2fc-d882-4343-a182-5d4cae12692f\" (UID: \"ffb2d2fc-d882-4343-a182-5d4cae12692f\") " Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.048836 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ffb2d2fc-d882-4343-a182-5d4cae12692f-additional-scripts\") pod \"ffb2d2fc-d882-4343-a182-5d4cae12692f\" (UID: \"ffb2d2fc-d882-4343-a182-5d4cae12692f\") " Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.048863 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ffb2d2fc-d882-4343-a182-5d4cae12692f-var-log-ovn\") pod \"ffb2d2fc-d882-4343-a182-5d4cae12692f\" (UID: \"ffb2d2fc-d882-4343-a182-5d4cae12692f\") " Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.048891 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ffb2d2fc-d882-4343-a182-5d4cae12692f-var-run\") pod \"ffb2d2fc-d882-4343-a182-5d4cae12692f\" (UID: \"ffb2d2fc-d882-4343-a182-5d4cae12692f\") " Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.049619 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpl4j\" (UniqueName: \"kubernetes.io/projected/3d5ab80d-40d3-4259-a13a-efb59d66b725-kube-api-access-fpl4j\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.049664 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb2d2fc-d882-4343-a182-5d4cae12692f-var-run" (OuterVolumeSpecName: 
"var-run") pod "ffb2d2fc-d882-4343-a182-5d4cae12692f" (UID: "ffb2d2fc-d882-4343-a182-5d4cae12692f"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.049929 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb2d2fc-d882-4343-a182-5d4cae12692f-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ffb2d2fc-d882-4343-a182-5d4cae12692f" (UID: "ffb2d2fc-d882-4343-a182-5d4cae12692f"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.049970 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb2d2fc-d882-4343-a182-5d4cae12692f-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ffb2d2fc-d882-4343-a182-5d4cae12692f" (UID: "ffb2d2fc-d882-4343-a182-5d4cae12692f"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.050051 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffb2d2fc-d882-4343-a182-5d4cae12692f-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "ffb2d2fc-d882-4343-a182-5d4cae12692f" (UID: "ffb2d2fc-d882-4343-a182-5d4cae12692f"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.050596 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffb2d2fc-d882-4343-a182-5d4cae12692f-scripts" (OuterVolumeSpecName: "scripts") pod "ffb2d2fc-d882-4343-a182-5d4cae12692f" (UID: "ffb2d2fc-d882-4343-a182-5d4cae12692f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.063478 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb2d2fc-d882-4343-a182-5d4cae12692f-kube-api-access-t8xf8" (OuterVolumeSpecName: "kube-api-access-t8xf8") pod "ffb2d2fc-d882-4343-a182-5d4cae12692f" (UID: "ffb2d2fc-d882-4343-a182-5d4cae12692f"). InnerVolumeSpecName "kube-api-access-t8xf8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.153871 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffb2d2fc-d882-4343-a182-5d4cae12692f-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.153914 4721 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ffb2d2fc-d882-4343-a182-5d4cae12692f-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.153927 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8xf8\" (UniqueName: \"kubernetes.io/projected/ffb2d2fc-d882-4343-a182-5d4cae12692f-kube-api-access-t8xf8\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.153941 4721 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ffb2d2fc-d882-4343-a182-5d4cae12692f-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.153954 4721 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ffb2d2fc-d882-4343-a182-5d4cae12692f-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.153964 4721 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ffb2d2fc-d882-4343-a182-5d4cae12692f-var-run\") on node \"crc\" DevicePath \"\"" Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.177493 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.180456 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.180886 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.295566 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eabe6b07-da9d-4980-99b4-12c02640c88d","Type":"ContainerStarted","Data":"2b37f2dfa80fc68b4e9a756361e61309fa1a16513ea3496c3920cb4c670eb9cf"} Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.310127 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-nftsl" Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.310726 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nftsl" event={"ID":"3d5ab80d-40d3-4259-a13a-efb59d66b725","Type":"ContainerDied","Data":"28498d7488cd8662de9104fba2a52e44054093c2b223f3e35ab776768caaf308"} Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.310793 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28498d7488cd8662de9104fba2a52e44054093c2b223f3e35ab776768caaf308" Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.315929 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"7a6930c7-1819-4b7d-baf6-773a8b68e568","Type":"ContainerStarted","Data":"425d57884fbd57da1d35ea45143e44ac869dd71714bc4250dbb690863f08fa73"} Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.325349 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6a34d077-087f-4b04-98c5-22e09450dcb3","Type":"ContainerStarted","Data":"210ff9e6575bb6d9c6f06a24d54a63c727752ea3d455cc8dcc85c4b4bb3b8827"} Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.334765 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-l5h78-config-5bmh9" event={"ID":"ffb2d2fc-d882-4343-a182-5d4cae12692f","Type":"ContainerDied","Data":"834db9ab0bc9ea710cdeeae1fdd49ed39cc3b00bb16ebc533c33cc39366d14fc"} Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.334808 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="834db9ab0bc9ea710cdeeae1fdd49ed39cc3b00bb16ebc533c33cc39366d14fc" Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.334816 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-l5h78-config-5bmh9" Feb 02 13:22:56 crc kubenswrapper[4721]: I0202 13:22:56.360334 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=8.777589048 podStartE2EDuration="10.360310169s" podCreationTimestamp="2026-02-02 13:22:46 +0000 UTC" firstStartedPulling="2026-02-02 13:22:53.615188372 +0000 UTC m=+1313.917702761" lastFinishedPulling="2026-02-02 13:22:55.197909493 +0000 UTC m=+1315.500423882" observedRunningTime="2026-02-02 13:22:56.3558965 +0000 UTC m=+1316.658410889" watchObservedRunningTime="2026-02-02 13:22:56.360310169 +0000 UTC m=+1316.662824558" Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.055046 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-l5h78-config-5bmh9"] Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.063773 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-l5h78-config-5bmh9"] Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.185746 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-l5h78-config-ffx5v"] Feb 02 13:22:57 crc kubenswrapper[4721]: E0202 13:22:57.186313 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d5ab80d-40d3-4259-a13a-efb59d66b725" containerName="mariadb-account-create-update" Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.186339 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d5ab80d-40d3-4259-a13a-efb59d66b725" containerName="mariadb-account-create-update" Feb 02 13:22:57 crc kubenswrapper[4721]: E0202 13:22:57.186417 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb2d2fc-d882-4343-a182-5d4cae12692f" containerName="ovn-config" Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.186429 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb2d2fc-d882-4343-a182-5d4cae12692f" containerName="ovn-config" Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.186683 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d5ab80d-40d3-4259-a13a-efb59d66b725" containerName="mariadb-account-create-update" Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.186723 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffb2d2fc-d882-4343-a182-5d4cae12692f" containerName="ovn-config" Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.187648 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-l5h78-config-ffx5v"
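The pod_startup_latency_tracker entry above for openstack/mysqld-exporter-0 makes the relationship between its fields explicit: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same interval minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A minimal sketch checking the arithmetic against the values logged above (the ts helper and the truncation to microseconds are ours, not kubelet code):

```python
# A minimal sketch (not kubelet code) reproducing the startup-duration
# arithmetic from the pod_startup_latency_tracker entry above for
# openstack/mysqld-exporter-0. Timestamps are copied from the log entry
# and truncated to microseconds, which is enough to match its values.
from datetime import datetime, timezone

def ts(s: str) -> datetime:
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

created    = ts("2026-02-02 13:22:46.000000")   # podCreationTimestamp
first_pull = ts("2026-02-02 13:22:53.615188")   # firstStartedPulling
last_pull  = ts("2026-02-02 13:22:55.197909")   # lastFinishedPulling
observed   = ts("2026-02-02 13:22:56.360310")   # watchObservedRunningTime

e2e = (observed - created).total_seconds()
slo = e2e - (last_pull - first_pull).total_seconds()
print(f"podStartE2EDuration ~ {e2e:.6f}s")  # ~10.360310s, as logged
print(f"podStartSLOduration ~ {slo:.6f}s")  # ~8.777589s, as logged
```

Consistently, later entries in this log that report the zero timestamp 0001-01-01 00:00:00 +0000 UTC for both pull fields (no image pull occurred) show podStartSLOduration equal to podStartE2EDuration.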
Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.193570 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.195035 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxj28\" (UniqueName: \"kubernetes.io/projected/aad06c3d-8212-4a61-b491-2af939014fd6-kube-api-access-pxj28\") pod \"ovn-controller-l5h78-config-ffx5v\" (UID: \"aad06c3d-8212-4a61-b491-2af939014fd6\") " pod="openstack/ovn-controller-l5h78-config-ffx5v" Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.195215 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aad06c3d-8212-4a61-b491-2af939014fd6-var-run\") pod \"ovn-controller-l5h78-config-ffx5v\" (UID: \"aad06c3d-8212-4a61-b491-2af939014fd6\") " pod="openstack/ovn-controller-l5h78-config-ffx5v" Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.195258 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/aad06c3d-8212-4a61-b491-2af939014fd6-var-run-ovn\") pod \"ovn-controller-l5h78-config-ffx5v\" (UID: \"aad06c3d-8212-4a61-b491-2af939014fd6\") " pod="openstack/ovn-controller-l5h78-config-ffx5v" Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.195412 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/aad06c3d-8212-4a61-b491-2af939014fd6-additional-scripts\") pod \"ovn-controller-l5h78-config-ffx5v\" (UID: \"aad06c3d-8212-4a61-b491-2af939014fd6\") " pod="openstack/ovn-controller-l5h78-config-ffx5v" Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.195463 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/aad06c3d-8212-4a61-b491-2af939014fd6-var-log-ovn\") pod \"ovn-controller-l5h78-config-ffx5v\" (UID: \"aad06c3d-8212-4a61-b491-2af939014fd6\") " pod="openstack/ovn-controller-l5h78-config-ffx5v" Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.195495 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aad06c3d-8212-4a61-b491-2af939014fd6-scripts\") pod \"ovn-controller-l5h78-config-ffx5v\" (UID: \"aad06c3d-8212-4a61-b491-2af939014fd6\") " pod="openstack/ovn-controller-l5h78-config-ffx5v" Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.212699 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-l5h78-config-ffx5v"] Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.297177 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/aad06c3d-8212-4a61-b491-2af939014fd6-additional-scripts\") pod \"ovn-controller-l5h78-config-ffx5v\" (UID: \"aad06c3d-8212-4a61-b491-2af939014fd6\") " pod="openstack/ovn-controller-l5h78-config-ffx5v" Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.297253 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName:
\"kubernetes.io/host-path/aad06c3d-8212-4a61-b491-2af939014fd6-var-log-ovn\") pod \"ovn-controller-l5h78-config-ffx5v\" (UID: \"aad06c3d-8212-4a61-b491-2af939014fd6\") " pod="openstack/ovn-controller-l5h78-config-ffx5v" Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.297277 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aad06c3d-8212-4a61-b491-2af939014fd6-scripts\") pod \"ovn-controller-l5h78-config-ffx5v\" (UID: \"aad06c3d-8212-4a61-b491-2af939014fd6\") " pod="openstack/ovn-controller-l5h78-config-ffx5v" Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.297360 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxj28\" (UniqueName: \"kubernetes.io/projected/aad06c3d-8212-4a61-b491-2af939014fd6-kube-api-access-pxj28\") pod \"ovn-controller-l5h78-config-ffx5v\" (UID: \"aad06c3d-8212-4a61-b491-2af939014fd6\") " pod="openstack/ovn-controller-l5h78-config-ffx5v" Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.297394 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aad06c3d-8212-4a61-b491-2af939014fd6-var-run\") pod \"ovn-controller-l5h78-config-ffx5v\" (UID: \"aad06c3d-8212-4a61-b491-2af939014fd6\") " pod="openstack/ovn-controller-l5h78-config-ffx5v" Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.297416 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/aad06c3d-8212-4a61-b491-2af939014fd6-var-run-ovn\") pod \"ovn-controller-l5h78-config-ffx5v\" (UID: \"aad06c3d-8212-4a61-b491-2af939014fd6\") " pod="openstack/ovn-controller-l5h78-config-ffx5v" Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.297740 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/aad06c3d-8212-4a61-b491-2af939014fd6-var-run-ovn\") pod \"ovn-controller-l5h78-config-ffx5v\" (UID: \"aad06c3d-8212-4a61-b491-2af939014fd6\") " pod="openstack/ovn-controller-l5h78-config-ffx5v" Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.297738 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/aad06c3d-8212-4a61-b491-2af939014fd6-var-log-ovn\") pod \"ovn-controller-l5h78-config-ffx5v\" (UID: \"aad06c3d-8212-4a61-b491-2af939014fd6\") " pod="openstack/ovn-controller-l5h78-config-ffx5v" Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.297795 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aad06c3d-8212-4a61-b491-2af939014fd6-var-run\") pod \"ovn-controller-l5h78-config-ffx5v\" (UID: \"aad06c3d-8212-4a61-b491-2af939014fd6\") " pod="openstack/ovn-controller-l5h78-config-ffx5v" Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.298387 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/aad06c3d-8212-4a61-b491-2af939014fd6-additional-scripts\") pod \"ovn-controller-l5h78-config-ffx5v\" (UID: \"aad06c3d-8212-4a61-b491-2af939014fd6\") " pod="openstack/ovn-controller-l5h78-config-ffx5v" Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.301329 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/aad06c3d-8212-4a61-b491-2af939014fd6-scripts\") pod \"ovn-controller-l5h78-config-ffx5v\" (UID: \"aad06c3d-8212-4a61-b491-2af939014fd6\") " pod="openstack/ovn-controller-l5h78-config-ffx5v" Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.329424 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxj28\" (UniqueName: \"kubernetes.io/projected/aad06c3d-8212-4a61-b491-2af939014fd6-kube-api-access-pxj28\") pod \"ovn-controller-l5h78-config-ffx5v\" (UID: \"aad06c3d-8212-4a61-b491-2af939014fd6\") " pod="openstack/ovn-controller-l5h78-config-ffx5v" Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.522147 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-l5h78-config-ffx5v" Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.805762 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-nftsl"] Feb 02 13:22:57 crc kubenswrapper[4721]: I0202 13:22:57.816171 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-nftsl"] Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.131304 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-l5h78-config-ffx5v"] Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.214858 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-2whnq"] Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.216262 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2whnq" Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.233747 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-2whnq"] Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.402608 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eabe6b07-da9d-4980-99b4-12c02640c88d","Type":"ContainerStarted","Data":"3db13f1bd0d6b258fdb59622f982629010d6c369785373bbc3462937f58c58ae"} Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.404001 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-l5h78-config-ffx5v" event={"ID":"aad06c3d-8212-4a61-b491-2af939014fd6","Type":"ContainerStarted","Data":"4ada9cfeb6a025a91e338410d7105720ab2f63a7cd8ce8f9992f12746a0d195c"} Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.417370 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwcg5\" (UniqueName: \"kubernetes.io/projected/937d142a-7868-4de2-85f3-90dcc5a74019-kube-api-access-dwcg5\") pod \"barbican-db-create-2whnq\" (UID: \"937d142a-7868-4de2-85f3-90dcc5a74019\") " pod="openstack/barbican-db-create-2whnq" Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.417586 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/937d142a-7868-4de2-85f3-90dcc5a74019-operator-scripts\") pod \"barbican-db-create-2whnq\" (UID: \"937d142a-7868-4de2-85f3-90dcc5a74019\") " pod="openstack/barbican-db-create-2whnq" Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.442663 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d5ab80d-40d3-4259-a13a-efb59d66b725" path="/var/lib/kubelet/pods/3d5ab80d-40d3-4259-a13a-efb59d66b725/volumes" Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 
13:22:58.443287 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffb2d2fc-d882-4343-a182-5d4cae12692f" path="/var/lib/kubelet/pods/ffb2d2fc-d882-4343-a182-5d4cae12692f/volumes" Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.509182 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-15d5-account-create-update-5kl6r"] Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.520634 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/937d142a-7868-4de2-85f3-90dcc5a74019-operator-scripts\") pod \"barbican-db-create-2whnq\" (UID: \"937d142a-7868-4de2-85f3-90dcc5a74019\") " pod="openstack/barbican-db-create-2whnq" Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.520741 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwcg5\" (UniqueName: \"kubernetes.io/projected/937d142a-7868-4de2-85f3-90dcc5a74019-kube-api-access-dwcg5\") pod \"barbican-db-create-2whnq\" (UID: \"937d142a-7868-4de2-85f3-90dcc5a74019\") " pod="openstack/barbican-db-create-2whnq" Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.522817 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/937d142a-7868-4de2-85f3-90dcc5a74019-operator-scripts\") pod \"barbican-db-create-2whnq\" (UID: \"937d142a-7868-4de2-85f3-90dcc5a74019\") " pod="openstack/barbican-db-create-2whnq" Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.528803 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-15d5-account-create-update-5kl6r" Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.537301 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.553431 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-15d5-account-create-update-5kl6r"] Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.587555 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwcg5\" (UniqueName: \"kubernetes.io/projected/937d142a-7868-4de2-85f3-90dcc5a74019-kube-api-access-dwcg5\") pod \"barbican-db-create-2whnq\" (UID: \"937d142a-7868-4de2-85f3-90dcc5a74019\") " pod="openstack/barbican-db-create-2whnq" Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.623377 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67f56b66-72ae-4c95-8051-dc5f7a0faec4-operator-scripts\") pod \"barbican-15d5-account-create-update-5kl6r\" (UID: \"67f56b66-72ae-4c95-8051-dc5f7a0faec4\") " pod="openstack/barbican-15d5-account-create-update-5kl6r" Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.623510 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9twnj\" (UniqueName: \"kubernetes.io/projected/67f56b66-72ae-4c95-8051-dc5f7a0faec4-kube-api-access-9twnj\") pod \"barbican-15d5-account-create-update-5kl6r\" (UID: \"67f56b66-72ae-4c95-8051-dc5f7a0faec4\") " pod="openstack/barbican-15d5-account-create-update-5kl6r" Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.713059 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-xmp7t"] Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 
13:22:58.714518 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-xmp7t" Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.726576 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67f56b66-72ae-4c95-8051-dc5f7a0faec4-operator-scripts\") pod \"barbican-15d5-account-create-update-5kl6r\" (UID: \"67f56b66-72ae-4c95-8051-dc5f7a0faec4\") " pod="openstack/barbican-15d5-account-create-update-5kl6r" Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.726716 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9twnj\" (UniqueName: \"kubernetes.io/projected/67f56b66-72ae-4c95-8051-dc5f7a0faec4-kube-api-access-9twnj\") pod \"barbican-15d5-account-create-update-5kl6r\" (UID: \"67f56b66-72ae-4c95-8051-dc5f7a0faec4\") " pod="openstack/barbican-15d5-account-create-update-5kl6r" Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.727900 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67f56b66-72ae-4c95-8051-dc5f7a0faec4-operator-scripts\") pod \"barbican-15d5-account-create-update-5kl6r\" (UID: \"67f56b66-72ae-4c95-8051-dc5f7a0faec4\") " pod="openstack/barbican-15d5-account-create-update-5kl6r" Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.745678 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-xmp7t"] Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.794931 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9twnj\" (UniqueName: \"kubernetes.io/projected/67f56b66-72ae-4c95-8051-dc5f7a0faec4-kube-api-access-9twnj\") pod \"barbican-15d5-account-create-update-5kl6r\" (UID: \"67f56b66-72ae-4c95-8051-dc5f7a0faec4\") " pod="openstack/barbican-15d5-account-create-update-5kl6r" Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.833976 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51bc4821-8b8e-4972-a90e-67a7a7b1fee5-operator-scripts\") pod \"heat-db-create-xmp7t\" (UID: \"51bc4821-8b8e-4972-a90e-67a7a7b1fee5\") " pod="openstack/heat-db-create-xmp7t" Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.834493 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrlbg\" (UniqueName: \"kubernetes.io/projected/51bc4821-8b8e-4972-a90e-67a7a7b1fee5-kube-api-access-mrlbg\") pod \"heat-db-create-xmp7t\" (UID: \"51bc4821-8b8e-4972-a90e-67a7a7b1fee5\") " pod="openstack/heat-db-create-xmp7t" Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.866225 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2whnq" Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.867791 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-15d5-account-create-update-5kl6r"
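The reconciler entries repeating through this stretch trace one per-volume state machine: operationExecutor.VerifyControllerAttachedVolume started (reconciler_common.go:245), then operationExecutor.MountVolume started (reconciler_common.go:218), then MountVolume.SetUp succeeded (operation_generator.go:637); teardown, as at the top of this excerpt, mirrors it with UnmountVolume started, UnmountVolume.TearDown succeeded, and Volume detached. A rough classifier for journal lines like these, assuming the klog formatting shown here (the patterns and the classify helper are illustrative, not a complete parser):

```python
# Rough sketch: bucket kubelet volume-reconciler journal lines into lifecycle
# phases. Phase strings are copied verbatim from the entries in this log; the
# optional backslash tolerates both the escaped (\") quoting of the structured
# entries and the plain (") quoting of the TearDown lines.
import re

PHASES = [
    ("attach-verified", r'VerifyControllerAttachedVolume started for volume \\?"([^"\\]+)'),
    ("mount-started",   r'MountVolume started for volume \\?"([^"\\]+)'),
    ("mount-done",      r'MountVolume\.SetUp succeeded for volume \\?"([^"\\]+)'),
    ("unmount-started", r'UnmountVolume started for volume \\?"([^"\\]+)'),
    ("unmount-done",    r'UnmountVolume\.TearDown succeeded for volume \\?"([^"\\]+)'),
    ("detached",        r'Volume detached for volume \\?"([^"\\]+)'),
]

def classify(line: str):
    """Return (phase, volume) for a reconciler line, or None for other lines."""
    for phase, pattern in PHASES:
        m = re.search(pattern, line)
        if m:
            return phase, m.group(1)
    return None

line = ('I0202 13:22:58.727900 4721 operation_generator.go:637] '
        '"MountVolume.SetUp succeeded for volume \\"operator-scripts\\" ..."')
print(classify(line))  # ('mount-done', 'operator-scripts')
```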
Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.946379 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrlbg\" (UniqueName: \"kubernetes.io/projected/51bc4821-8b8e-4972-a90e-67a7a7b1fee5-kube-api-access-mrlbg\") pod \"heat-db-create-xmp7t\" (UID: \"51bc4821-8b8e-4972-a90e-67a7a7b1fee5\") " pod="openstack/heat-db-create-xmp7t" Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.946697 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51bc4821-8b8e-4972-a90e-67a7a7b1fee5-operator-scripts\") pod \"heat-db-create-xmp7t\" (UID: \"51bc4821-8b8e-4972-a90e-67a7a7b1fee5\") " pod="openstack/heat-db-create-xmp7t" Feb 02 13:22:58 crc kubenswrapper[4721]: I0202 13:22:58.959765 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51bc4821-8b8e-4972-a90e-67a7a7b1fee5-operator-scripts\") pod \"heat-db-create-xmp7t\" (UID: \"51bc4821-8b8e-4972-a90e-67a7a7b1fee5\") " pod="openstack/heat-db-create-xmp7t" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.048152 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-6f5e-account-create-update-nk8v6"] Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.081357 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-wffvl"] Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.086597 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-6f5e-account-create-update-nk8v6" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.091837 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.092650 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wffvl"] Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.092764 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wffvl" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.129645 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-6f5e-account-create-update-nk8v6"] Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.173255 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrlbg\" (UniqueName: \"kubernetes.io/projected/51bc4821-8b8e-4972-a90e-67a7a7b1fee5-kube-api-access-mrlbg\") pod \"heat-db-create-xmp7t\" (UID: \"51bc4821-8b8e-4972-a90e-67a7a7b1fee5\") " pod="openstack/heat-db-create-xmp7t" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.176007 4721 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/heat-db-create-xmp7t" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.225981 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d74a2f0-9f60-4f59-92e4-11b9136f1db5-operator-scripts\") pod \"heat-6f5e-account-create-update-nk8v6\" (UID: \"0d74a2f0-9f60-4f59-92e4-11b9136f1db5\") " pod="openstack/heat-6f5e-account-create-update-nk8v6" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.227981 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bspjq\" (UniqueName: \"kubernetes.io/projected/13666544-a226-43ee-84c9-3232e9fff8d4-kube-api-access-bspjq\") pod \"cinder-db-create-wffvl\" (UID: \"13666544-a226-43ee-84c9-3232e9fff8d4\") " pod="openstack/cinder-db-create-wffvl" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.228880 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vckvd\" (UniqueName: \"kubernetes.io/projected/0d74a2f0-9f60-4f59-92e4-11b9136f1db5-kube-api-access-vckvd\") pod \"heat-6f5e-account-create-update-nk8v6\" (UID: \"0d74a2f0-9f60-4f59-92e4-11b9136f1db5\") " pod="openstack/heat-6f5e-account-create-update-nk8v6" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.230021 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13666544-a226-43ee-84c9-3232e9fff8d4-operator-scripts\") pod \"cinder-db-create-wffvl\" (UID: \"13666544-a226-43ee-84c9-3232e9fff8d4\") " pod="openstack/cinder-db-create-wffvl" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.276657 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-xs4g5"] Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.278563 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xs4g5" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.291756 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-219b-account-create-update-c48ml"] Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.293581 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-219b-account-create-update-c48ml" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.295571 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.310261 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-xs4g5"] Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.326450 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-219b-account-create-update-c48ml"] Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.335899 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bspjq\" (UniqueName: \"kubernetes.io/projected/13666544-a226-43ee-84c9-3232e9fff8d4-kube-api-access-bspjq\") pod \"cinder-db-create-wffvl\" (UID: \"13666544-a226-43ee-84c9-3232e9fff8d4\") " pod="openstack/cinder-db-create-wffvl" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.335965 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vckvd\" (UniqueName: \"kubernetes.io/projected/0d74a2f0-9f60-4f59-92e4-11b9136f1db5-kube-api-access-vckvd\") pod \"heat-6f5e-account-create-update-nk8v6\" (UID: \"0d74a2f0-9f60-4f59-92e4-11b9136f1db5\") " pod="openstack/heat-6f5e-account-create-update-nk8v6" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.336012 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13666544-a226-43ee-84c9-3232e9fff8d4-operator-scripts\") pod \"cinder-db-create-wffvl\" (UID: \"13666544-a226-43ee-84c9-3232e9fff8d4\") " pod="openstack/cinder-db-create-wffvl" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.336063 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45ad4533-c6a5-49da-8f33-23113f8b7fea-operator-scripts\") pod \"neutron-db-create-xs4g5\" (UID: \"45ad4533-c6a5-49da-8f33-23113f8b7fea\") " pod="openstack/neutron-db-create-xs4g5" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.336100 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8m4n\" (UniqueName: \"kubernetes.io/projected/45ad4533-c6a5-49da-8f33-23113f8b7fea-kube-api-access-f8m4n\") pod \"neutron-db-create-xs4g5\" (UID: \"45ad4533-c6a5-49da-8f33-23113f8b7fea\") " pod="openstack/neutron-db-create-xs4g5" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.336133 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d74a2f0-9f60-4f59-92e4-11b9136f1db5-operator-scripts\") pod \"heat-6f5e-account-create-update-nk8v6\" (UID: \"0d74a2f0-9f60-4f59-92e4-11b9136f1db5\") " pod="openstack/heat-6f5e-account-create-update-nk8v6" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.336906 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d74a2f0-9f60-4f59-92e4-11b9136f1db5-operator-scripts\") pod \"heat-6f5e-account-create-update-nk8v6\" (UID: \"0d74a2f0-9f60-4f59-92e4-11b9136f1db5\") " pod="openstack/heat-6f5e-account-create-update-nk8v6" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.337838 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/13666544-a226-43ee-84c9-3232e9fff8d4-operator-scripts\") pod \"cinder-db-create-wffvl\" (UID: \"13666544-a226-43ee-84c9-3232e9fff8d4\") " pod="openstack/cinder-db-create-wffvl" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.342130 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-wvp2n"] Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.343871 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wvp2n" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.348792 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.350403 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.350763 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.351009 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7z9s7" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.358199 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wvp2n"] Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.378229 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vckvd\" (UniqueName: \"kubernetes.io/projected/0d74a2f0-9f60-4f59-92e4-11b9136f1db5-kube-api-access-vckvd\") pod \"heat-6f5e-account-create-update-nk8v6\" (UID: \"0d74a2f0-9f60-4f59-92e4-11b9136f1db5\") " pod="openstack/heat-6f5e-account-create-update-nk8v6" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.378750 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bspjq\" (UniqueName: \"kubernetes.io/projected/13666544-a226-43ee-84c9-3232e9fff8d4-kube-api-access-bspjq\") pod \"cinder-db-create-wffvl\" (UID: \"13666544-a226-43ee-84c9-3232e9fff8d4\") " pod="openstack/cinder-db-create-wffvl" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.418330 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-9865-account-create-update-5xd7v"] Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.420267 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9865-account-create-update-5xd7v" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.422563 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.438456 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3395efa1-7b43-4b48-9e06-764b9428c5ab-combined-ca-bundle\") pod \"keystone-db-sync-wvp2n\" (UID: \"3395efa1-7b43-4b48-9e06-764b9428c5ab\") " pod="openstack/keystone-db-sync-wvp2n" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.438530 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppjqv\" (UniqueName: \"kubernetes.io/projected/3395efa1-7b43-4b48-9e06-764b9428c5ab-kube-api-access-ppjqv\") pod \"keystone-db-sync-wvp2n\" (UID: \"3395efa1-7b43-4b48-9e06-764b9428c5ab\") " pod="openstack/keystone-db-sync-wvp2n" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.438603 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46b707f0-c9cf-46b5-b615-4c0ab1da0391-operator-scripts\") pod \"cinder-219b-account-create-update-c48ml\" (UID: \"46b707f0-c9cf-46b5-b615-4c0ab1da0391\") " pod="openstack/cinder-219b-account-create-update-c48ml" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.438662 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xshcx\" (UniqueName: \"kubernetes.io/projected/46b707f0-c9cf-46b5-b615-4c0ab1da0391-kube-api-access-xshcx\") pod \"cinder-219b-account-create-update-c48ml\" (UID: \"46b707f0-c9cf-46b5-b615-4c0ab1da0391\") " pod="openstack/cinder-219b-account-create-update-c48ml" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.438737 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45ad4533-c6a5-49da-8f33-23113f8b7fea-operator-scripts\") pod \"neutron-db-create-xs4g5\" (UID: \"45ad4533-c6a5-49da-8f33-23113f8b7fea\") " pod="openstack/neutron-db-create-xs4g5" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.438763 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8m4n\" (UniqueName: \"kubernetes.io/projected/45ad4533-c6a5-49da-8f33-23113f8b7fea-kube-api-access-f8m4n\") pod \"neutron-db-create-xs4g5\" (UID: \"45ad4533-c6a5-49da-8f33-23113f8b7fea\") " pod="openstack/neutron-db-create-xs4g5" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.438785 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3395efa1-7b43-4b48-9e06-764b9428c5ab-config-data\") pod \"keystone-db-sync-wvp2n\" (UID: \"3395efa1-7b43-4b48-9e06-764b9428c5ab\") " pod="openstack/keystone-db-sync-wvp2n" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.439506 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45ad4533-c6a5-49da-8f33-23113f8b7fea-operator-scripts\") pod \"neutron-db-create-xs4g5\" (UID: \"45ad4533-c6a5-49da-8f33-23113f8b7fea\") " pod="openstack/neutron-db-create-xs4g5" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.440473 4721 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9865-account-create-update-5xd7v"] Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.449222 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-l5h78-config-ffx5v" event={"ID":"aad06c3d-8212-4a61-b491-2af939014fd6","Type":"ContainerStarted","Data":"53b5d624776f4223952c5574dd921abb5bb1d5c538eeb2e730f20d491cd8ec2a"} Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.451371 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-6f5e-account-create-update-nk8v6" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.468272 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wffvl" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.504963 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-l5h78-config-ffx5v" podStartSLOduration=2.504941958 podStartE2EDuration="2.504941958s" podCreationTimestamp="2026-02-02 13:22:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:22:59.491667769 +0000 UTC m=+1319.794182158" watchObservedRunningTime="2026-02-02 13:22:59.504941958 +0000 UTC m=+1319.807456347" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.524672 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8m4n\" (UniqueName: \"kubernetes.io/projected/45ad4533-c6a5-49da-8f33-23113f8b7fea-kube-api-access-f8m4n\") pod \"neutron-db-create-xs4g5\" (UID: \"45ad4533-c6a5-49da-8f33-23113f8b7fea\") " pod="openstack/neutron-db-create-xs4g5" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.540498 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xshcx\" (UniqueName: \"kubernetes.io/projected/46b707f0-c9cf-46b5-b615-4c0ab1da0391-kube-api-access-xshcx\") pod \"cinder-219b-account-create-update-c48ml\" (UID: \"46b707f0-c9cf-46b5-b615-4c0ab1da0391\") " pod="openstack/cinder-219b-account-create-update-c48ml" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.540649 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3395efa1-7b43-4b48-9e06-764b9428c5ab-config-data\") pod \"keystone-db-sync-wvp2n\" (UID: \"3395efa1-7b43-4b48-9e06-764b9428c5ab\") " pod="openstack/keystone-db-sync-wvp2n" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.540747 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3395efa1-7b43-4b48-9e06-764b9428c5ab-combined-ca-bundle\") pod \"keystone-db-sync-wvp2n\" (UID: \"3395efa1-7b43-4b48-9e06-764b9428c5ab\") " pod="openstack/keystone-db-sync-wvp2n" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.540780 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92mfj\" (UniqueName: \"kubernetes.io/projected/375b0aad-b921-41d8-af30-181ac4a73c0b-kube-api-access-92mfj\") pod \"neutron-9865-account-create-update-5xd7v\" (UID: \"375b0aad-b921-41d8-af30-181ac4a73c0b\") " pod="openstack/neutron-9865-account-create-update-5xd7v" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.540851 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppjqv\" 
(UniqueName: \"kubernetes.io/projected/3395efa1-7b43-4b48-9e06-764b9428c5ab-kube-api-access-ppjqv\") pod \"keystone-db-sync-wvp2n\" (UID: \"3395efa1-7b43-4b48-9e06-764b9428c5ab\") " pod="openstack/keystone-db-sync-wvp2n" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.540884 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/375b0aad-b921-41d8-af30-181ac4a73c0b-operator-scripts\") pod \"neutron-9865-account-create-update-5xd7v\" (UID: \"375b0aad-b921-41d8-af30-181ac4a73c0b\") " pod="openstack/neutron-9865-account-create-update-5xd7v" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.540966 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46b707f0-c9cf-46b5-b615-4c0ab1da0391-operator-scripts\") pod \"cinder-219b-account-create-update-c48ml\" (UID: \"46b707f0-c9cf-46b5-b615-4c0ab1da0391\") " pod="openstack/cinder-219b-account-create-update-c48ml" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.544373 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3395efa1-7b43-4b48-9e06-764b9428c5ab-config-data\") pod \"keystone-db-sync-wvp2n\" (UID: \"3395efa1-7b43-4b48-9e06-764b9428c5ab\") " pod="openstack/keystone-db-sync-wvp2n" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.544552 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46b707f0-c9cf-46b5-b615-4c0ab1da0391-operator-scripts\") pod \"cinder-219b-account-create-update-c48ml\" (UID: \"46b707f0-c9cf-46b5-b615-4c0ab1da0391\") " pod="openstack/cinder-219b-account-create-update-c48ml" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.544872 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3395efa1-7b43-4b48-9e06-764b9428c5ab-combined-ca-bundle\") pod \"keystone-db-sync-wvp2n\" (UID: \"3395efa1-7b43-4b48-9e06-764b9428c5ab\") " pod="openstack/keystone-db-sync-wvp2n" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.564588 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppjqv\" (UniqueName: \"kubernetes.io/projected/3395efa1-7b43-4b48-9e06-764b9428c5ab-kube-api-access-ppjqv\") pod \"keystone-db-sync-wvp2n\" (UID: \"3395efa1-7b43-4b48-9e06-764b9428c5ab\") " pod="openstack/keystone-db-sync-wvp2n" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.574577 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xshcx\" (UniqueName: \"kubernetes.io/projected/46b707f0-c9cf-46b5-b615-4c0ab1da0391-kube-api-access-xshcx\") pod \"cinder-219b-account-create-update-c48ml\" (UID: \"46b707f0-c9cf-46b5-b615-4c0ab1da0391\") " pod="openstack/cinder-219b-account-create-update-c48ml" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.618545 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-xs4g5" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.642553 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92mfj\" (UniqueName: \"kubernetes.io/projected/375b0aad-b921-41d8-af30-181ac4a73c0b-kube-api-access-92mfj\") pod \"neutron-9865-account-create-update-5xd7v\" (UID: \"375b0aad-b921-41d8-af30-181ac4a73c0b\") " pod="openstack/neutron-9865-account-create-update-5xd7v" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.642622 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/375b0aad-b921-41d8-af30-181ac4a73c0b-operator-scripts\") pod \"neutron-9865-account-create-update-5xd7v\" (UID: \"375b0aad-b921-41d8-af30-181ac4a73c0b\") " pod="openstack/neutron-9865-account-create-update-5xd7v" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.643480 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/375b0aad-b921-41d8-af30-181ac4a73c0b-operator-scripts\") pod \"neutron-9865-account-create-update-5xd7v\" (UID: \"375b0aad-b921-41d8-af30-181ac4a73c0b\") " pod="openstack/neutron-9865-account-create-update-5xd7v" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.664682 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92mfj\" (UniqueName: \"kubernetes.io/projected/375b0aad-b921-41d8-af30-181ac4a73c0b-kube-api-access-92mfj\") pod \"neutron-9865-account-create-update-5xd7v\" (UID: \"375b0aad-b921-41d8-af30-181ac4a73c0b\") " pod="openstack/neutron-9865-account-create-update-5xd7v" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.758545 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-2whnq"] Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.761500 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-219b-account-create-update-c48ml" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.785897 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wvp2n" Feb 02 13:22:59 crc kubenswrapper[4721]: I0202 13:22:59.959014 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9865-account-create-update-5xd7v" Feb 02 13:23:00 crc kubenswrapper[4721]: I0202 13:23:00.117712 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-15d5-account-create-update-5kl6r"] Feb 02 13:23:00 crc kubenswrapper[4721]: I0202 13:23:00.139433 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-xmp7t"] Feb 02 13:23:00 crc kubenswrapper[4721]: I0202 13:23:00.543694 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eabe6b07-da9d-4980-99b4-12c02640c88d","Type":"ContainerStarted","Data":"433b60737b3066082bb639518e6147a5a6503ebf9a4c06d09a6c97730c7824ba"} Feb 02 13:23:00 crc kubenswrapper[4721]: I0202 13:23:00.544920 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eabe6b07-da9d-4980-99b4-12c02640c88d","Type":"ContainerStarted","Data":"9afe80fe939a2ef5cddb97ad2017b2646fef9c58e3864240547b1fd3d8877107"} Feb 02 13:23:00 crc kubenswrapper[4721]: I0202 13:23:00.544990 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eabe6b07-da9d-4980-99b4-12c02640c88d","Type":"ContainerStarted","Data":"293cf162bd164320a8b9271bc53f14416d35b91812c9de3fdda210a674d4fe6c"} Feb 02 13:23:00 crc kubenswrapper[4721]: I0202 13:23:00.599269 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6a34d077-087f-4b04-98c5-22e09450dcb3","Type":"ContainerStarted","Data":"d77970247eaad2de283a9d77316bf0b2e70ed8392f8f537a13730da5d5109f30"} Feb 02 13:23:00 crc kubenswrapper[4721]: I0202 13:23:00.622529 4721 generic.go:334] "Generic (PLEG): container finished" podID="aad06c3d-8212-4a61-b491-2af939014fd6" containerID="53b5d624776f4223952c5574dd921abb5bb1d5c538eeb2e730f20d491cd8ec2a" exitCode=0 Feb 02 13:23:00 crc kubenswrapper[4721]: I0202 13:23:00.622603 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-l5h78-config-ffx5v" event={"ID":"aad06c3d-8212-4a61-b491-2af939014fd6","Type":"ContainerDied","Data":"53b5d624776f4223952c5574dd921abb5bb1d5c538eeb2e730f20d491cd8ec2a"} Feb 02 13:23:00 crc kubenswrapper[4721]: I0202 13:23:00.658414 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2whnq" event={"ID":"937d142a-7868-4de2-85f3-90dcc5a74019","Type":"ContainerStarted","Data":"49e2f9d6a9f5b04c1ec533b19afe36a66018a912e5ef184f9b92ab178816de33"} Feb 02 13:23:00 crc kubenswrapper[4721]: I0202 13:23:00.658447 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2whnq" event={"ID":"937d142a-7868-4de2-85f3-90dcc5a74019","Type":"ContainerStarted","Data":"ccbde8ddec5b3224b7082f9998adb72d6d3847a838f98d65a4e3c8f153647757"} Feb 02 13:23:00 crc kubenswrapper[4721]: I0202 13:23:00.659975 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-6f5e-account-create-update-nk8v6"] Feb 02 13:23:00 crc kubenswrapper[4721]: I0202 13:23:00.663946 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 02 13:23:00 crc kubenswrapper[4721]: I0202 13:23:00.671315 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-15d5-account-create-update-5kl6r" event={"ID":"67f56b66-72ae-4c95-8051-dc5f7a0faec4","Type":"ContainerStarted","Data":"27b5beb216b40a6bf47c26d5501508f64c30a9831dd1df313dcf922bbdd6bfbf"} Feb 02 13:23:00 crc kubenswrapper[4721]: I0202 13:23:00.671383 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-15d5-account-create-update-5kl6r" event={"ID":"67f56b66-72ae-4c95-8051-dc5f7a0faec4","Type":"ContainerStarted","Data":"44f1c86ebd459ef3f37a415120f2dce3ea862497ab8630574d3be6ad60426784"}
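In the SyncLoop (PLEG) entries above, the event={...} payload happens to be valid JSON, and the paired generic.go:334 "container finished" entries carry the container's exit code. A small sketch for pulling both out of lines like these (the pleg_event and exit_code helpers are our names, and this assumes the klog formatting shown here):

```python
# Sketch: extract PLEG pod-lifecycle events and container exit codes from
# kubelet journal lines like the ones above. The event={...} payload in
# these entries is valid JSON, so json.loads handles it directly.
import json
import re

EVENT = re.compile(r'event=(\{.*?\})')
EXIT  = re.compile(r'containerID="([0-9a-f]{64})" exitCode=(-?\d+)')

def pleg_event(line: str):
    """Return the decoded event dict for a 'SyncLoop (PLEG)' line, else None."""
    m = EVENT.search(line)
    return json.loads(m.group(1)) if m else None

def exit_code(line: str):
    """Return (container_id, exit_code) for a 'container finished' line, else None."""
    m = EXIT.search(line)
    return (m.group(1), int(m.group(2))) if m else None

line = ('I0202 13:23:00.622603 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" '
        'pod="openstack/ovn-controller-l5h78-config-ffx5v" '
        'event={"ID":"aad06c3d-8212-4a61-b491-2af939014fd6","Type":"ContainerDied",'
        '"Data":"53b5d624776f4223952c5574dd921abb5bb1d5c538eeb2e730f20d491cd8ec2a"}')
print(pleg_event(line)["Type"])  # ContainerDied
```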
Feb 02 13:23:00 crc kubenswrapper[4721]: I0202 13:23:00.679829 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wffvl"] Feb 02 13:23:00 crc kubenswrapper[4721]: I0202 13:23:00.688379 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-xmp7t" event={"ID":"51bc4821-8b8e-4972-a90e-67a7a7b1fee5","Type":"ContainerStarted","Data":"5b1beefa4df5e3624727dd6828dfc7ef605a0f5abd94ccfd815abf8f0ff6c96a"} Feb 02 13:23:00 crc kubenswrapper[4721]: I0202 13:23:00.706818 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-xs4g5"] Feb 02 13:23:00 crc kubenswrapper[4721]: W0202 13:23:00.735597 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46b707f0_c9cf_46b5_b615_4c0ab1da0391.slice/crio-f41f8d7a37c5122fc63cc4eb313e0c73b2f929235be1be06f31ae30167258bf6 WatchSource:0}: Error finding container f41f8d7a37c5122fc63cc4eb313e0c73b2f929235be1be06f31ae30167258bf6: Status 404 returned error can't find the container with id f41f8d7a37c5122fc63cc4eb313e0c73b2f929235be1be06f31ae30167258bf6 Feb 02 13:23:00 crc kubenswrapper[4721]: W0202 13:23:00.811131 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3395efa1_7b43_4b48_9e06_764b9428c5ab.slice/crio-9a4595f760619b3ba238fcbf4e59500ea2d92f85648541e4ad855bfe4ec9d3fa WatchSource:0}: Error finding container 9a4595f760619b3ba238fcbf4e59500ea2d92f85648541e4ad855bfe4ec9d3fa: Status 404 returned error can't find the container with id 9a4595f760619b3ba238fcbf4e59500ea2d92f85648541e4ad855bfe4ec9d3fa Feb 02 13:23:00 crc kubenswrapper[4721]: I0202 13:23:00.829905 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 02 13:23:00 crc kubenswrapper[4721]: I0202 13:23:00.867375 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 02 13:23:00 crc kubenswrapper[4721]: I0202 13:23:00.872647 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-219b-account-create-update-c48ml"] Feb 02 13:23:00 crc kubenswrapper[4721]: I0202 13:23:00.884161 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wvp2n"] Feb 02 13:23:00 crc kubenswrapper[4721]: I0202 13:23:00.901382 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-15d5-account-create-update-5kl6r" podStartSLOduration=2.9013588820000002 podStartE2EDuration="2.901358882s" podCreationTimestamp="2026-02-02 13:22:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:23:00.768248313 +0000 UTC m=+1321.070762702" watchObservedRunningTime="2026-02-02 13:23:00.901358882 +0000 UTC m=+1321.203873261" Feb 02 13:23:00 crc kubenswrapper[4721]: I0202 13:23:00.913997 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-2whnq" podStartSLOduration=2.913975943 podStartE2EDuration="2.913975943s" podCreationTimestamp="2026-02-02 13:22:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:23:00.796822785 +0000 UTC m=+1321.099337174" watchObservedRunningTime="2026-02-02 13:23:00.913975943 +0000 UTC m=+1321.216490342"
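The two manager.go:1169 warnings above record cAdvisor failing to look up a cgroup for a container CRI-O had only just created; both IDs (f41f8d7a... and 9a4595f7...) reappear below as ContainerStarted data for openstack/cinder-219b-account-create-update-c48ml and openstack/keystone-db-sync-wvp2n, so the 404s look like a transient startup race rather than lost containers. One way to check that against a saved journal (kubelet.log is a placeholder path):

```python
# Sketch: cross-check cAdvisor "Failed to process watch event ... 404"
# warnings against later ContainerStarted PLEG events in the same journal.
# If a warned-about ID later starts (as both IDs in this excerpt do), the
# warning was a transient startup race. "kubelet.log" is a placeholder.
import re

HEX_ID = re.compile(r"[0-9a-f]{64}")

def correlate(lines):
    warned, started = set(), set()
    for line in lines:
        ids = HEX_ID.findall(line)
        if "Failed to process watch event" in line:
            warned.update(ids)
        elif "ContainerStarted" in line:
            started.update(ids)
    return {cid: cid in started for cid in warned}

if __name__ == "__main__":
    with open("kubelet.log") as f:
        for cid, ok in correlate(f).items():
            print(cid[:12], "started later" if ok else "never started")
```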
Feb 02 13:23:00 crc kubenswrapper[4721]: I0202 13:23:00.934890 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9865-account-create-update-5xd7v"] Feb 02 13:23:00 crc kubenswrapper[4721]: I0202 13:23:00.937496 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-xmp7t" podStartSLOduration=2.9374794680000003 podStartE2EDuration="2.937479468s" podCreationTimestamp="2026-02-02 13:22:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:23:00.81143308 +0000 UTC m=+1321.113947469" watchObservedRunningTime="2026-02-02 13:23:00.937479468 +0000 UTC m=+1321.239993857" Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.232583 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-ncb4h"] Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.235196 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ncb4h" Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.238468 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.256747 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ncb4h"] Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.316021 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v98wl\" (UniqueName: \"kubernetes.io/projected/33682180-e2b1-4c20-a374-1a90e1ccea48-kube-api-access-v98wl\") pod \"root-account-create-update-ncb4h\" (UID: \"33682180-e2b1-4c20-a374-1a90e1ccea48\") " pod="openstack/root-account-create-update-ncb4h" Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.316831 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33682180-e2b1-4c20-a374-1a90e1ccea48-operator-scripts\") pod \"root-account-create-update-ncb4h\" (UID: \"33682180-e2b1-4c20-a374-1a90e1ccea48\") " pod="openstack/root-account-create-update-ncb4h" Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.418790 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33682180-e2b1-4c20-a374-1a90e1ccea48-operator-scripts\") pod \"root-account-create-update-ncb4h\" (UID: \"33682180-e2b1-4c20-a374-1a90e1ccea48\") " pod="openstack/root-account-create-update-ncb4h" Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.418886 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v98wl\" (UniqueName: \"kubernetes.io/projected/33682180-e2b1-4c20-a374-1a90e1ccea48-kube-api-access-v98wl\") pod \"root-account-create-update-ncb4h\" (UID: \"33682180-e2b1-4c20-a374-1a90e1ccea48\") " pod="openstack/root-account-create-update-ncb4h" Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.419720 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName:
\"kubernetes.io/configmap/33682180-e2b1-4c20-a374-1a90e1ccea48-operator-scripts\") pod \"root-account-create-update-ncb4h\" (UID: \"33682180-e2b1-4c20-a374-1a90e1ccea48\") " pod="openstack/root-account-create-update-ncb4h" Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.447780 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v98wl\" (UniqueName: \"kubernetes.io/projected/33682180-e2b1-4c20-a374-1a90e1ccea48-kube-api-access-v98wl\") pod \"root-account-create-update-ncb4h\" (UID: \"33682180-e2b1-4c20-a374-1a90e1ccea48\") " pod="openstack/root-account-create-update-ncb4h" Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.616895 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ncb4h" Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.736569 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9865-account-create-update-5xd7v" event={"ID":"375b0aad-b921-41d8-af30-181ac4a73c0b","Type":"ContainerStarted","Data":"b4ecf1fb05394c16a116b372fdffdfa0e7375e6cf9e5a5e825266b2a826c68fd"} Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.736805 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9865-account-create-update-5xd7v" event={"ID":"375b0aad-b921-41d8-af30-181ac4a73c0b","Type":"ContainerStarted","Data":"974942fc39aed915ac1ed8e72bf664ad24873fb228a582d3a6ebf0cd4fa2ef59"} Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.755849 4721 generic.go:334] "Generic (PLEG): container finished" podID="937d142a-7868-4de2-85f3-90dcc5a74019" containerID="49e2f9d6a9f5b04c1ec533b19afe36a66018a912e5ef184f9b92ab178816de33" exitCode=0 Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.756042 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2whnq" event={"ID":"937d142a-7868-4de2-85f3-90dcc5a74019","Type":"ContainerDied","Data":"49e2f9d6a9f5b04c1ec533b19afe36a66018a912e5ef184f9b92ab178816de33"} Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.761080 4721 generic.go:334] "Generic (PLEG): container finished" podID="67f56b66-72ae-4c95-8051-dc5f7a0faec4" containerID="27b5beb216b40a6bf47c26d5501508f64c30a9831dd1df313dcf922bbdd6bfbf" exitCode=0 Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.761152 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-15d5-account-create-update-5kl6r" event={"ID":"67f56b66-72ae-4c95-8051-dc5f7a0faec4","Type":"ContainerDied","Data":"27b5beb216b40a6bf47c26d5501508f64c30a9831dd1df313dcf922bbdd6bfbf"} Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.763626 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xs4g5" event={"ID":"45ad4533-c6a5-49da-8f33-23113f8b7fea","Type":"ContainerStarted","Data":"ebaaea23221e663dcf2fea49186c159f19607ea1b1253a2c8f769a54c895b470"} Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.763724 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xs4g5" event={"ID":"45ad4533-c6a5-49da-8f33-23113f8b7fea","Type":"ContainerStarted","Data":"29e7addf8e62cdc1322715c0687562f2bbdba871b07331508c1e9cd6c6eb1e35"} Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.765921 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-219b-account-create-update-c48ml" 
event={"ID":"46b707f0-c9cf-46b5-b615-4c0ab1da0391","Type":"ContainerStarted","Data":"fdd34df09b38c8d4122669f8f4cba7de1072b0822ab1da7e4f47ca0b7bdd4576"} Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.765989 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-219b-account-create-update-c48ml" event={"ID":"46b707f0-c9cf-46b5-b615-4c0ab1da0391","Type":"ContainerStarted","Data":"f41f8d7a37c5122fc63cc4eb313e0c73b2f929235be1be06f31ae30167258bf6"} Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.768054 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-9865-account-create-update-5xd7v" podStartSLOduration=2.768027253 podStartE2EDuration="2.768027253s" podCreationTimestamp="2026-02-02 13:22:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:23:01.75644509 +0000 UTC m=+1322.058959479" watchObservedRunningTime="2026-02-02 13:23:01.768027253 +0000 UTC m=+1322.070541652" Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.768357 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-6f5e-account-create-update-nk8v6" event={"ID":"0d74a2f0-9f60-4f59-92e4-11b9136f1db5","Type":"ContainerStarted","Data":"c60a5b8d8e8a5c3310b5b5e67ee44c42a8f4c0e6c7827776fd046304bab7b307"} Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.768392 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-6f5e-account-create-update-nk8v6" event={"ID":"0d74a2f0-9f60-4f59-92e4-11b9136f1db5","Type":"ContainerStarted","Data":"169c0d730bc1c83a8cb544a1cea72333ffa248d90f19384ccb9cd170b9f67be0"} Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.769778 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wvp2n" event={"ID":"3395efa1-7b43-4b48-9e06-764b9428c5ab","Type":"ContainerStarted","Data":"9a4595f760619b3ba238fcbf4e59500ea2d92f85648541e4ad855bfe4ec9d3fa"} Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.779636 4721 generic.go:334] "Generic (PLEG): container finished" podID="13666544-a226-43ee-84c9-3232e9fff8d4" containerID="11d4406ea2aeef5800b1d48d5c16350e8f64df4bb7e540c2b8bb59f164e7298a" exitCode=0 Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.779829 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wffvl" event={"ID":"13666544-a226-43ee-84c9-3232e9fff8d4","Type":"ContainerDied","Data":"11d4406ea2aeef5800b1d48d5c16350e8f64df4bb7e540c2b8bb59f164e7298a"} Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.779896 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wffvl" event={"ID":"13666544-a226-43ee-84c9-3232e9fff8d4","Type":"ContainerStarted","Data":"0f7322d4b1604bf0b23f8dfd7fd198bcd3d01cc971e9e7e970f47421793f65ea"} Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.787660 4721 generic.go:334] "Generic (PLEG): container finished" podID="51bc4821-8b8e-4972-a90e-67a7a7b1fee5" containerID="075500f50301a059fcb64d3fd73cd025d86472e12f54ce995ede7cc3876a10cc" exitCode=0 Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.787729 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-xmp7t" event={"ID":"51bc4821-8b8e-4972-a90e-67a7a7b1fee5","Type":"ContainerDied","Data":"075500f50301a059fcb64d3fd73cd025d86472e12f54ce995ede7cc3876a10cc"} Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.802766 4721 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/heat-6f5e-account-create-update-nk8v6" podStartSLOduration=3.802747621 podStartE2EDuration="3.802747621s" podCreationTimestamp="2026-02-02 13:22:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:23:01.795266239 +0000 UTC m=+1322.097780638" watchObservedRunningTime="2026-02-02 13:23:01.802747621 +0000 UTC m=+1322.105262010" Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.900335 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-xs4g5" podStartSLOduration=2.900250958 podStartE2EDuration="2.900250958s" podCreationTimestamp="2026-02-02 13:22:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:23:01.813817531 +0000 UTC m=+1322.116331920" watchObservedRunningTime="2026-02-02 13:23:01.900250958 +0000 UTC m=+1322.202765347" Feb 02 13:23:01 crc kubenswrapper[4721]: I0202 13:23:01.913395 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-219b-account-create-update-c48ml" podStartSLOduration=2.913375462 podStartE2EDuration="2.913375462s" podCreationTimestamp="2026-02-02 13:22:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:23:01.848436487 +0000 UTC m=+1322.150950906" watchObservedRunningTime="2026-02-02 13:23:01.913375462 +0000 UTC m=+1322.215889851" Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.180694 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ncb4h"] Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.313901 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-l5h78-config-ffx5v" Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.371220 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aad06c3d-8212-4a61-b491-2af939014fd6-scripts\") pod \"aad06c3d-8212-4a61-b491-2af939014fd6\" (UID: \"aad06c3d-8212-4a61-b491-2af939014fd6\") " Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.373593 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aad06c3d-8212-4a61-b491-2af939014fd6-scripts" (OuterVolumeSpecName: "scripts") pod "aad06c3d-8212-4a61-b491-2af939014fd6" (UID: "aad06c3d-8212-4a61-b491-2af939014fd6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.373742 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/aad06c3d-8212-4a61-b491-2af939014fd6-var-run-ovn\") pod \"aad06c3d-8212-4a61-b491-2af939014fd6\" (UID: \"aad06c3d-8212-4a61-b491-2af939014fd6\") " Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.373910 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/aad06c3d-8212-4a61-b491-2af939014fd6-additional-scripts\") pod \"aad06c3d-8212-4a61-b491-2af939014fd6\" (UID: \"aad06c3d-8212-4a61-b491-2af939014fd6\") " Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.374059 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/aad06c3d-8212-4a61-b491-2af939014fd6-var-log-ovn\") pod \"aad06c3d-8212-4a61-b491-2af939014fd6\" (UID: \"aad06c3d-8212-4a61-b491-2af939014fd6\") " Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.374118 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxj28\" (UniqueName: \"kubernetes.io/projected/aad06c3d-8212-4a61-b491-2af939014fd6-kube-api-access-pxj28\") pod \"aad06c3d-8212-4a61-b491-2af939014fd6\" (UID: \"aad06c3d-8212-4a61-b491-2af939014fd6\") " Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.374226 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aad06c3d-8212-4a61-b491-2af939014fd6-var-run\") pod \"aad06c3d-8212-4a61-b491-2af939014fd6\" (UID: \"aad06c3d-8212-4a61-b491-2af939014fd6\") " Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.374982 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aad06c3d-8212-4a61-b491-2af939014fd6-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.375023 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aad06c3d-8212-4a61-b491-2af939014fd6-var-run" (OuterVolumeSpecName: "var-run") pod "aad06c3d-8212-4a61-b491-2af939014fd6" (UID: "aad06c3d-8212-4a61-b491-2af939014fd6"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.375047 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aad06c3d-8212-4a61-b491-2af939014fd6-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "aad06c3d-8212-4a61-b491-2af939014fd6" (UID: "aad06c3d-8212-4a61-b491-2af939014fd6"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.376556 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aad06c3d-8212-4a61-b491-2af939014fd6-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "aad06c3d-8212-4a61-b491-2af939014fd6" (UID: "aad06c3d-8212-4a61-b491-2af939014fd6"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.376592 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aad06c3d-8212-4a61-b491-2af939014fd6-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "aad06c3d-8212-4a61-b491-2af939014fd6" (UID: "aad06c3d-8212-4a61-b491-2af939014fd6"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.413475 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aad06c3d-8212-4a61-b491-2af939014fd6-kube-api-access-pxj28" (OuterVolumeSpecName: "kube-api-access-pxj28") pod "aad06c3d-8212-4a61-b491-2af939014fd6" (UID: "aad06c3d-8212-4a61-b491-2af939014fd6"). InnerVolumeSpecName "kube-api-access-pxj28". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.480121 4721 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/aad06c3d-8212-4a61-b491-2af939014fd6-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.480176 4721 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/aad06c3d-8212-4a61-b491-2af939014fd6-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.480195 4721 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/aad06c3d-8212-4a61-b491-2af939014fd6-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.480207 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxj28\" (UniqueName: \"kubernetes.io/projected/aad06c3d-8212-4a61-b491-2af939014fd6-kube-api-access-pxj28\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.480221 4721 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aad06c3d-8212-4a61-b491-2af939014fd6-var-run\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.584675 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-l5h78-config-ffx5v"] Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.601892 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-l5h78-config-ffx5v"] Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.821784 4721 generic.go:334] "Generic (PLEG): container finished" podID="46b707f0-c9cf-46b5-b615-4c0ab1da0391" containerID="fdd34df09b38c8d4122669f8f4cba7de1072b0822ab1da7e4f47ca0b7bdd4576" exitCode=0 Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.824914 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-219b-account-create-update-c48ml" event={"ID":"46b707f0-c9cf-46b5-b615-4c0ab1da0391","Type":"ContainerDied","Data":"fdd34df09b38c8d4122669f8f4cba7de1072b0822ab1da7e4f47ca0b7bdd4576"} Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.831110 4721 generic.go:334] "Generic (PLEG): container finished" podID="0d74a2f0-9f60-4f59-92e4-11b9136f1db5" containerID="c60a5b8d8e8a5c3310b5b5e67ee44c42a8f4c0e6c7827776fd046304bab7b307" exitCode=0 Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.831173 4721 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/heat-6f5e-account-create-update-nk8v6" event={"ID":"0d74a2f0-9f60-4f59-92e4-11b9136f1db5","Type":"ContainerDied","Data":"c60a5b8d8e8a5c3310b5b5e67ee44c42a8f4c0e6c7827776fd046304bab7b307"} Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.833563 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ncb4h" event={"ID":"33682180-e2b1-4c20-a374-1a90e1ccea48","Type":"ContainerStarted","Data":"b46df9c1fd68942af3a0d55f5470efc7009a80b9e946652e536d88fee6b65e76"} Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.851925 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ada9cfeb6a025a91e338410d7105720ab2f63a7cd8ce8f9992f12746a0d195c" Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.852106 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-l5h78-config-ffx5v" Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.862932 4721 generic.go:334] "Generic (PLEG): container finished" podID="375b0aad-b921-41d8-af30-181ac4a73c0b" containerID="b4ecf1fb05394c16a116b372fdffdfa0e7375e6cf9e5a5e825266b2a826c68fd" exitCode=0 Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.863161 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9865-account-create-update-5xd7v" event={"ID":"375b0aad-b921-41d8-af30-181ac4a73c0b","Type":"ContainerDied","Data":"b4ecf1fb05394c16a116b372fdffdfa0e7375e6cf9e5a5e825266b2a826c68fd"} Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.867487 4721 generic.go:334] "Generic (PLEG): container finished" podID="45ad4533-c6a5-49da-8f33-23113f8b7fea" containerID="ebaaea23221e663dcf2fea49186c159f19607ea1b1253a2c8f769a54c895b470" exitCode=0 Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.867770 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xs4g5" event={"ID":"45ad4533-c6a5-49da-8f33-23113f8b7fea","Type":"ContainerDied","Data":"ebaaea23221e663dcf2fea49186c159f19607ea1b1253a2c8f769a54c895b470"} Feb 02 13:23:02 crc kubenswrapper[4721]: I0202 13:23:02.915934 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-ncb4h" podStartSLOduration=1.915911707 podStartE2EDuration="1.915911707s" podCreationTimestamp="2026-02-02 13:23:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:23:02.891350674 +0000 UTC m=+1323.193865083" watchObservedRunningTime="2026-02-02 13:23:02.915911707 +0000 UTC m=+1323.218426086" Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.483980 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-2whnq" Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.650792 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/937d142a-7868-4de2-85f3-90dcc5a74019-operator-scripts\") pod \"937d142a-7868-4de2-85f3-90dcc5a74019\" (UID: \"937d142a-7868-4de2-85f3-90dcc5a74019\") " Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.651242 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwcg5\" (UniqueName: \"kubernetes.io/projected/937d142a-7868-4de2-85f3-90dcc5a74019-kube-api-access-dwcg5\") pod \"937d142a-7868-4de2-85f3-90dcc5a74019\" (UID: \"937d142a-7868-4de2-85f3-90dcc5a74019\") " Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.653260 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/937d142a-7868-4de2-85f3-90dcc5a74019-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "937d142a-7868-4de2-85f3-90dcc5a74019" (UID: "937d142a-7868-4de2-85f3-90dcc5a74019"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.653382 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/937d142a-7868-4de2-85f3-90dcc5a74019-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.664486 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/937d142a-7868-4de2-85f3-90dcc5a74019-kube-api-access-dwcg5" (OuterVolumeSpecName: "kube-api-access-dwcg5") pod "937d142a-7868-4de2-85f3-90dcc5a74019" (UID: "937d142a-7868-4de2-85f3-90dcc5a74019"). InnerVolumeSpecName "kube-api-access-dwcg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.755262 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwcg5\" (UniqueName: \"kubernetes.io/projected/937d142a-7868-4de2-85f3-90dcc5a74019-kube-api-access-dwcg5\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.801600 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-15d5-account-create-update-5kl6r" Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.815435 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wffvl" Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.872080 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-xmp7t" Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.884422 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wffvl" event={"ID":"13666544-a226-43ee-84c9-3232e9fff8d4","Type":"ContainerDied","Data":"0f7322d4b1604bf0b23f8dfd7fd198bcd3d01cc971e9e7e970f47421793f65ea"} Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.884469 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f7322d4b1604bf0b23f8dfd7fd198bcd3d01cc971e9e7e970f47421793f65ea" Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.884539 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-wffvl" Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.896840 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2whnq" event={"ID":"937d142a-7868-4de2-85f3-90dcc5a74019","Type":"ContainerDied","Data":"ccbde8ddec5b3224b7082f9998adb72d6d3847a838f98d65a4e3c8f153647757"} Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.896889 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccbde8ddec5b3224b7082f9998adb72d6d3847a838f98d65a4e3c8f153647757" Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.896957 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2whnq" Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.903378 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-15d5-account-create-update-5kl6r" event={"ID":"67f56b66-72ae-4c95-8051-dc5f7a0faec4","Type":"ContainerDied","Data":"44f1c86ebd459ef3f37a415120f2dce3ea862497ab8630574d3be6ad60426784"} Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.903455 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44f1c86ebd459ef3f37a415120f2dce3ea862497ab8630574d3be6ad60426784" Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.903529 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-15d5-account-create-update-5kl6r" Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.919342 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eabe6b07-da9d-4980-99b4-12c02640c88d","Type":"ContainerStarted","Data":"4b0459d96819ed1c4f94cc04d9655e4896c56b29043a3b7e3bdc6c07268930ff"} Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.919384 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eabe6b07-da9d-4980-99b4-12c02640c88d","Type":"ContainerStarted","Data":"a8c25675fe65b31f7b4fc38d6f04f21bc1b4ad8668c35b5c6d3da5a1d9b594f2"} Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.919397 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eabe6b07-da9d-4980-99b4-12c02640c88d","Type":"ContainerStarted","Data":"e1d47512b2f14259ddeaad0adf0863d2bd4091f41d5f1ce812357618be0b36ec"} Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.924757 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-xmp7t" event={"ID":"51bc4821-8b8e-4972-a90e-67a7a7b1fee5","Type":"ContainerDied","Data":"5b1beefa4df5e3624727dd6828dfc7ef605a0f5abd94ccfd815abf8f0ff6c96a"} Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.924833 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b1beefa4df5e3624727dd6828dfc7ef605a0f5abd94ccfd815abf8f0ff6c96a" Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.924904 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-xmp7t" Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.938084 4721 generic.go:334] "Generic (PLEG): container finished" podID="33682180-e2b1-4c20-a374-1a90e1ccea48" containerID="ebcd143f9cd75602d2879409dcec3b4694439187ff4c0fda35cb07bd211f9634" exitCode=0 Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.939698 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ncb4h" event={"ID":"33682180-e2b1-4c20-a374-1a90e1ccea48","Type":"ContainerDied","Data":"ebcd143f9cd75602d2879409dcec3b4694439187ff4c0fda35cb07bd211f9634"} Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.963619 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13666544-a226-43ee-84c9-3232e9fff8d4-operator-scripts\") pod \"13666544-a226-43ee-84c9-3232e9fff8d4\" (UID: \"13666544-a226-43ee-84c9-3232e9fff8d4\") " Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.963742 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bspjq\" (UniqueName: \"kubernetes.io/projected/13666544-a226-43ee-84c9-3232e9fff8d4-kube-api-access-bspjq\") pod \"13666544-a226-43ee-84c9-3232e9fff8d4\" (UID: \"13666544-a226-43ee-84c9-3232e9fff8d4\") " Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.963797 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67f56b66-72ae-4c95-8051-dc5f7a0faec4-operator-scripts\") pod \"67f56b66-72ae-4c95-8051-dc5f7a0faec4\" (UID: \"67f56b66-72ae-4c95-8051-dc5f7a0faec4\") " Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.963859 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrlbg\" (UniqueName: \"kubernetes.io/projected/51bc4821-8b8e-4972-a90e-67a7a7b1fee5-kube-api-access-mrlbg\") pod \"51bc4821-8b8e-4972-a90e-67a7a7b1fee5\" (UID: \"51bc4821-8b8e-4972-a90e-67a7a7b1fee5\") " Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.963950 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51bc4821-8b8e-4972-a90e-67a7a7b1fee5-operator-scripts\") pod \"51bc4821-8b8e-4972-a90e-67a7a7b1fee5\" (UID: \"51bc4821-8b8e-4972-a90e-67a7a7b1fee5\") " Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.964086 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9twnj\" (UniqueName: \"kubernetes.io/projected/67f56b66-72ae-4c95-8051-dc5f7a0faec4-kube-api-access-9twnj\") pod \"67f56b66-72ae-4c95-8051-dc5f7a0faec4\" (UID: \"67f56b66-72ae-4c95-8051-dc5f7a0faec4\") " Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.964154 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13666544-a226-43ee-84c9-3232e9fff8d4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "13666544-a226-43ee-84c9-3232e9fff8d4" (UID: "13666544-a226-43ee-84c9-3232e9fff8d4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.964590 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13666544-a226-43ee-84c9-3232e9fff8d4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.965634 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51bc4821-8b8e-4972-a90e-67a7a7b1fee5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "51bc4821-8b8e-4972-a90e-67a7a7b1fee5" (UID: "51bc4821-8b8e-4972-a90e-67a7a7b1fee5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.965822 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67f56b66-72ae-4c95-8051-dc5f7a0faec4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "67f56b66-72ae-4c95-8051-dc5f7a0faec4" (UID: "67f56b66-72ae-4c95-8051-dc5f7a0faec4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.970521 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67f56b66-72ae-4c95-8051-dc5f7a0faec4-kube-api-access-9twnj" (OuterVolumeSpecName: "kube-api-access-9twnj") pod "67f56b66-72ae-4c95-8051-dc5f7a0faec4" (UID: "67f56b66-72ae-4c95-8051-dc5f7a0faec4"). InnerVolumeSpecName "kube-api-access-9twnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.970586 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13666544-a226-43ee-84c9-3232e9fff8d4-kube-api-access-bspjq" (OuterVolumeSpecName: "kube-api-access-bspjq") pod "13666544-a226-43ee-84c9-3232e9fff8d4" (UID: "13666544-a226-43ee-84c9-3232e9fff8d4"). InnerVolumeSpecName "kube-api-access-bspjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:23:03 crc kubenswrapper[4721]: I0202 13:23:03.976589 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51bc4821-8b8e-4972-a90e-67a7a7b1fee5-kube-api-access-mrlbg" (OuterVolumeSpecName: "kube-api-access-mrlbg") pod "51bc4821-8b8e-4972-a90e-67a7a7b1fee5" (UID: "51bc4821-8b8e-4972-a90e-67a7a7b1fee5"). InnerVolumeSpecName "kube-api-access-mrlbg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.066113 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9twnj\" (UniqueName: \"kubernetes.io/projected/67f56b66-72ae-4c95-8051-dc5f7a0faec4-kube-api-access-9twnj\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.066440 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bspjq\" (UniqueName: \"kubernetes.io/projected/13666544-a226-43ee-84c9-3232e9fff8d4-kube-api-access-bspjq\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.066451 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67f56b66-72ae-4c95-8051-dc5f7a0faec4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.066464 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrlbg\" (UniqueName: \"kubernetes.io/projected/51bc4821-8b8e-4972-a90e-67a7a7b1fee5-kube-api-access-mrlbg\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.066472 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51bc4821-8b8e-4972-a90e-67a7a7b1fee5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.440261 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aad06c3d-8212-4a61-b491-2af939014fd6" path="/var/lib/kubelet/pods/aad06c3d-8212-4a61-b491-2af939014fd6/volumes" Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.494450 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-219b-account-create-update-c48ml" Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.581652 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xshcx\" (UniqueName: \"kubernetes.io/projected/46b707f0-c9cf-46b5-b615-4c0ab1da0391-kube-api-access-xshcx\") pod \"46b707f0-c9cf-46b5-b615-4c0ab1da0391\" (UID: \"46b707f0-c9cf-46b5-b615-4c0ab1da0391\") " Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.581768 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46b707f0-c9cf-46b5-b615-4c0ab1da0391-operator-scripts\") pod \"46b707f0-c9cf-46b5-b615-4c0ab1da0391\" (UID: \"46b707f0-c9cf-46b5-b615-4c0ab1da0391\") " Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.582441 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46b707f0-c9cf-46b5-b615-4c0ab1da0391-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "46b707f0-c9cf-46b5-b615-4c0ab1da0391" (UID: "46b707f0-c9cf-46b5-b615-4c0ab1da0391"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.584519 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46b707f0-c9cf-46b5-b615-4c0ab1da0391-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.602381 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46b707f0-c9cf-46b5-b615-4c0ab1da0391-kube-api-access-xshcx" (OuterVolumeSpecName: "kube-api-access-xshcx") pod "46b707f0-c9cf-46b5-b615-4c0ab1da0391" (UID: "46b707f0-c9cf-46b5-b615-4c0ab1da0391"). InnerVolumeSpecName "kube-api-access-xshcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.698511 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xshcx\" (UniqueName: \"kubernetes.io/projected/46b707f0-c9cf-46b5-b615-4c0ab1da0391-kube-api-access-xshcx\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.799391 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-6f5e-account-create-update-nk8v6" Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.817619 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xs4g5" Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.850384 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9865-account-create-update-5xd7v" Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.902139 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vckvd\" (UniqueName: \"kubernetes.io/projected/0d74a2f0-9f60-4f59-92e4-11b9136f1db5-kube-api-access-vckvd\") pod \"0d74a2f0-9f60-4f59-92e4-11b9136f1db5\" (UID: \"0d74a2f0-9f60-4f59-92e4-11b9136f1db5\") " Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.902280 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8m4n\" (UniqueName: \"kubernetes.io/projected/45ad4533-c6a5-49da-8f33-23113f8b7fea-kube-api-access-f8m4n\") pod \"45ad4533-c6a5-49da-8f33-23113f8b7fea\" (UID: \"45ad4533-c6a5-49da-8f33-23113f8b7fea\") " Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.902359 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45ad4533-c6a5-49da-8f33-23113f8b7fea-operator-scripts\") pod \"45ad4533-c6a5-49da-8f33-23113f8b7fea\" (UID: \"45ad4533-c6a5-49da-8f33-23113f8b7fea\") " Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.902408 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d74a2f0-9f60-4f59-92e4-11b9136f1db5-operator-scripts\") pod \"0d74a2f0-9f60-4f59-92e4-11b9136f1db5\" (UID: \"0d74a2f0-9f60-4f59-92e4-11b9136f1db5\") " Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.905978 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d74a2f0-9f60-4f59-92e4-11b9136f1db5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0d74a2f0-9f60-4f59-92e4-11b9136f1db5" (UID: "0d74a2f0-9f60-4f59-92e4-11b9136f1db5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.906541 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45ad4533-c6a5-49da-8f33-23113f8b7fea-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "45ad4533-c6a5-49da-8f33-23113f8b7fea" (UID: "45ad4533-c6a5-49da-8f33-23113f8b7fea"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.918560 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d74a2f0-9f60-4f59-92e4-11b9136f1db5-kube-api-access-vckvd" (OuterVolumeSpecName: "kube-api-access-vckvd") pod "0d74a2f0-9f60-4f59-92e4-11b9136f1db5" (UID: "0d74a2f0-9f60-4f59-92e4-11b9136f1db5"). InnerVolumeSpecName "kube-api-access-vckvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.921203 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45ad4533-c6a5-49da-8f33-23113f8b7fea-kube-api-access-f8m4n" (OuterVolumeSpecName: "kube-api-access-f8m4n") pod "45ad4533-c6a5-49da-8f33-23113f8b7fea" (UID: "45ad4533-c6a5-49da-8f33-23113f8b7fea"). InnerVolumeSpecName "kube-api-access-f8m4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.966746 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-6f5e-account-create-update-nk8v6" event={"ID":"0d74a2f0-9f60-4f59-92e4-11b9136f1db5","Type":"ContainerDied","Data":"169c0d730bc1c83a8cb544a1cea72333ffa248d90f19384ccb9cd170b9f67be0"} Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.966794 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="169c0d730bc1c83a8cb544a1cea72333ffa248d90f19384ccb9cd170b9f67be0" Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.966875 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-6f5e-account-create-update-nk8v6" Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.968895 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9865-account-create-update-5xd7v" event={"ID":"375b0aad-b921-41d8-af30-181ac4a73c0b","Type":"ContainerDied","Data":"974942fc39aed915ac1ed8e72bf664ad24873fb228a582d3a6ebf0cd4fa2ef59"} Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.968935 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="974942fc39aed915ac1ed8e72bf664ad24873fb228a582d3a6ebf0cd4fa2ef59" Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.968979 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9865-account-create-update-5xd7v" Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.982588 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eabe6b07-da9d-4980-99b4-12c02640c88d","Type":"ContainerStarted","Data":"7168af0bcba40eb18448198e217a748ba97ebaa8e8d4a6c817683c95c2ffa9a2"} Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.985027 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-xs4g5" Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.985078 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xs4g5" event={"ID":"45ad4533-c6a5-49da-8f33-23113f8b7fea","Type":"ContainerDied","Data":"29e7addf8e62cdc1322715c0687562f2bbdba871b07331508c1e9cd6c6eb1e35"} Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.985114 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29e7addf8e62cdc1322715c0687562f2bbdba871b07331508c1e9cd6c6eb1e35" Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.995603 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-219b-account-create-update-c48ml" event={"ID":"46b707f0-c9cf-46b5-b615-4c0ab1da0391","Type":"ContainerDied","Data":"f41f8d7a37c5122fc63cc4eb313e0c73b2f929235be1be06f31ae30167258bf6"} Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.995656 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f41f8d7a37c5122fc63cc4eb313e0c73b2f929235be1be06f31ae30167258bf6" Feb 02 13:23:04 crc kubenswrapper[4721]: I0202 13:23:04.995695 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-219b-account-create-update-c48ml" Feb 02 13:23:05 crc kubenswrapper[4721]: I0202 13:23:05.005965 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92mfj\" (UniqueName: \"kubernetes.io/projected/375b0aad-b921-41d8-af30-181ac4a73c0b-kube-api-access-92mfj\") pod \"375b0aad-b921-41d8-af30-181ac4a73c0b\" (UID: \"375b0aad-b921-41d8-af30-181ac4a73c0b\") " Feb 02 13:23:05 crc kubenswrapper[4721]: I0202 13:23:05.006292 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/375b0aad-b921-41d8-af30-181ac4a73c0b-operator-scripts\") pod \"375b0aad-b921-41d8-af30-181ac4a73c0b\" (UID: \"375b0aad-b921-41d8-af30-181ac4a73c0b\") " Feb 02 13:23:05 crc kubenswrapper[4721]: I0202 13:23:05.006994 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vckvd\" (UniqueName: \"kubernetes.io/projected/0d74a2f0-9f60-4f59-92e4-11b9136f1db5-kube-api-access-vckvd\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:05 crc kubenswrapper[4721]: I0202 13:23:05.007020 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8m4n\" (UniqueName: \"kubernetes.io/projected/45ad4533-c6a5-49da-8f33-23113f8b7fea-kube-api-access-f8m4n\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:05 crc kubenswrapper[4721]: I0202 13:23:05.007034 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45ad4533-c6a5-49da-8f33-23113f8b7fea-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:05 crc kubenswrapper[4721]: I0202 13:23:05.007047 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d74a2f0-9f60-4f59-92e4-11b9136f1db5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:05 crc kubenswrapper[4721]: I0202 13:23:05.007447 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/375b0aad-b921-41d8-af30-181ac4a73c0b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "375b0aad-b921-41d8-af30-181ac4a73c0b" (UID: "375b0aad-b921-41d8-af30-181ac4a73c0b"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:05 crc kubenswrapper[4721]: I0202 13:23:05.021498 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/375b0aad-b921-41d8-af30-181ac4a73c0b-kube-api-access-92mfj" (OuterVolumeSpecName: "kube-api-access-92mfj") pod "375b0aad-b921-41d8-af30-181ac4a73c0b" (UID: "375b0aad-b921-41d8-af30-181ac4a73c0b"). InnerVolumeSpecName "kube-api-access-92mfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:23:05 crc kubenswrapper[4721]: I0202 13:23:05.109773 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/375b0aad-b921-41d8-af30-181ac4a73c0b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:05 crc kubenswrapper[4721]: I0202 13:23:05.109809 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92mfj\" (UniqueName: \"kubernetes.io/projected/375b0aad-b921-41d8-af30-181ac4a73c0b-kube-api-access-92mfj\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:07 crc kubenswrapper[4721]: I0202 13:23:07.021083 4721 generic.go:334] "Generic (PLEG): container finished" podID="6a34d077-087f-4b04-98c5-22e09450dcb3" containerID="d77970247eaad2de283a9d77316bf0b2e70ed8392f8f537a13730da5d5109f30" exitCode=0 Feb 02 13:23:07 crc kubenswrapper[4721]: I0202 13:23:07.021205 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6a34d077-087f-4b04-98c5-22e09450dcb3","Type":"ContainerDied","Data":"d77970247eaad2de283a9d77316bf0b2e70ed8392f8f537a13730da5d5109f30"} Feb 02 13:23:07 crc kubenswrapper[4721]: I0202 13:23:07.961027 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ncb4h" Feb 02 13:23:08 crc kubenswrapper[4721]: I0202 13:23:08.047142 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ncb4h" event={"ID":"33682180-e2b1-4c20-a374-1a90e1ccea48","Type":"ContainerDied","Data":"b46df9c1fd68942af3a0d55f5470efc7009a80b9e946652e536d88fee6b65e76"} Feb 02 13:23:08 crc kubenswrapper[4721]: I0202 13:23:08.047185 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b46df9c1fd68942af3a0d55f5470efc7009a80b9e946652e536d88fee6b65e76" Feb 02 13:23:08 crc kubenswrapper[4721]: I0202 13:23:08.047228 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ncb4h" Feb 02 13:23:08 crc kubenswrapper[4721]: I0202 13:23:08.068946 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v98wl\" (UniqueName: \"kubernetes.io/projected/33682180-e2b1-4c20-a374-1a90e1ccea48-kube-api-access-v98wl\") pod \"33682180-e2b1-4c20-a374-1a90e1ccea48\" (UID: \"33682180-e2b1-4c20-a374-1a90e1ccea48\") " Feb 02 13:23:08 crc kubenswrapper[4721]: I0202 13:23:08.069027 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33682180-e2b1-4c20-a374-1a90e1ccea48-operator-scripts\") pod \"33682180-e2b1-4c20-a374-1a90e1ccea48\" (UID: \"33682180-e2b1-4c20-a374-1a90e1ccea48\") " Feb 02 13:23:08 crc kubenswrapper[4721]: I0202 13:23:08.070141 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33682180-e2b1-4c20-a374-1a90e1ccea48-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33682180-e2b1-4c20-a374-1a90e1ccea48" (UID: "33682180-e2b1-4c20-a374-1a90e1ccea48"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:08 crc kubenswrapper[4721]: I0202 13:23:08.074554 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33682180-e2b1-4c20-a374-1a90e1ccea48-kube-api-access-v98wl" (OuterVolumeSpecName: "kube-api-access-v98wl") pod "33682180-e2b1-4c20-a374-1a90e1ccea48" (UID: "33682180-e2b1-4c20-a374-1a90e1ccea48"). InnerVolumeSpecName "kube-api-access-v98wl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:23:08 crc kubenswrapper[4721]: I0202 13:23:08.172240 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v98wl\" (UniqueName: \"kubernetes.io/projected/33682180-e2b1-4c20-a374-1a90e1ccea48-kube-api-access-v98wl\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:08 crc kubenswrapper[4721]: I0202 13:23:08.172276 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33682180-e2b1-4c20-a374-1a90e1ccea48-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:09 crc kubenswrapper[4721]: I0202 13:23:09.080124 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6a34d077-087f-4b04-98c5-22e09450dcb3","Type":"ContainerStarted","Data":"23c6cfe3194c224b23120bb9f19a38bd8ee66c14a40b76a6edd0e87659480c40"} Feb 02 13:23:09 crc kubenswrapper[4721]: I0202 13:23:09.083107 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wvp2n" event={"ID":"3395efa1-7b43-4b48-9e06-764b9428c5ab","Type":"ContainerStarted","Data":"3bcf52e41be39651ef275074658dfd7b224ff525c369aa709ad250ff99c12eb1"} Feb 02 13:23:09 crc kubenswrapper[4721]: I0202 13:23:09.106798 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eabe6b07-da9d-4980-99b4-12c02640c88d","Type":"ContainerStarted","Data":"c41c7f6eec113dd73d809fc77c2060b653bcff6e3c1a843f75101eb9190fbc63"} Feb 02 13:23:09 crc kubenswrapper[4721]: I0202 13:23:09.116349 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-wvp2n" podStartSLOduration=2.6092471809999997 podStartE2EDuration="10.116329123s" podCreationTimestamp="2026-02-02 13:22:59 +0000 UTC" firstStartedPulling="2026-02-02 13:23:00.818689417 +0000 UTC 
m=+1321.121203806" lastFinishedPulling="2026-02-02 13:23:08.325771369 +0000 UTC m=+1328.628285748" observedRunningTime="2026-02-02 13:23:09.111922064 +0000 UTC m=+1329.414436453" watchObservedRunningTime="2026-02-02 13:23:09.116329123 +0000 UTC m=+1329.418843512" Feb 02 13:23:10 crc kubenswrapper[4721]: I0202 13:23:10.134380 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eabe6b07-da9d-4980-99b4-12c02640c88d","Type":"ContainerStarted","Data":"722886dac37f34d4c44ff7250903a50d8823214ab95bbc2b6cbc259a35ffc1c0"} Feb 02 13:23:10 crc kubenswrapper[4721]: I0202 13:23:10.134899 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eabe6b07-da9d-4980-99b4-12c02640c88d","Type":"ContainerStarted","Data":"5fcc5cd4aae3def8d9c518d7b5ab7a2ad186936d52ea53b6af7f7d6ce47ef8c6"} Feb 02 13:23:10 crc kubenswrapper[4721]: I0202 13:23:10.134918 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eabe6b07-da9d-4980-99b4-12c02640c88d","Type":"ContainerStarted","Data":"991d8d80fab0ad41caf90be0e9e35be6d4e89e176ecbcad61b359ba57cf0ad62"} Feb 02 13:23:10 crc kubenswrapper[4721]: I0202 13:23:10.134933 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eabe6b07-da9d-4980-99b4-12c02640c88d","Type":"ContainerStarted","Data":"62671d1d620f1f8547f0de767d3b54371754cd3be2a71c8ada676ab96a0c98de"} Feb 02 13:23:10 crc kubenswrapper[4721]: I0202 13:23:10.140184 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hrqtc" event={"ID":"0531b398-2d44-42c2-bd6c-9e9f7ab8c85d","Type":"ContainerStarted","Data":"08a73cbae26287f30a607e9d3bb9b367097d0316dac19c52bb303a922febd87c"} Feb 02 13:23:10 crc kubenswrapper[4721]: I0202 13:23:10.165608 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-hrqtc" podStartSLOduration=3.07168417 podStartE2EDuration="36.16558858s" podCreationTimestamp="2026-02-02 13:22:34 +0000 UTC" firstStartedPulling="2026-02-02 13:22:35.813096923 +0000 UTC m=+1296.115611312" lastFinishedPulling="2026-02-02 13:23:08.907001333 +0000 UTC m=+1329.209515722" observedRunningTime="2026-02-02 13:23:10.159508406 +0000 UTC m=+1330.462022795" watchObservedRunningTime="2026-02-02 13:23:10.16558858 +0000 UTC m=+1330.468102969" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.161052 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eabe6b07-da9d-4980-99b4-12c02640c88d","Type":"ContainerStarted","Data":"36f269ef6126c36a0da5dca794d54147445850b21de602541b9b0d786d1c3590"} Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.162412 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eabe6b07-da9d-4980-99b4-12c02640c88d","Type":"ContainerStarted","Data":"6cd48b9830efbc68918f9e68e64aa5bd7d26a8204ef9baf7243cb58a58c32b96"} Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.215652 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=38.089684651 podStartE2EDuration="50.215615019s" podCreationTimestamp="2026-02-02 13:22:21 +0000 UTC" firstStartedPulling="2026-02-02 13:22:56.199838181 +0000 UTC m=+1316.502352570" lastFinishedPulling="2026-02-02 13:23:08.325768549 +0000 UTC m=+1328.628282938" observedRunningTime="2026-02-02 13:23:11.202860294 +0000 UTC m=+1331.505374693" 
watchObservedRunningTime="2026-02-02 13:23:11.215615019 +0000 UTC m=+1331.518129408" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.594771 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-kr74s"] Feb 02 13:23:11 crc kubenswrapper[4721]: E0202 13:23:11.595307 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51bc4821-8b8e-4972-a90e-67a7a7b1fee5" containerName="mariadb-database-create" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.595328 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="51bc4821-8b8e-4972-a90e-67a7a7b1fee5" containerName="mariadb-database-create" Feb 02 13:23:11 crc kubenswrapper[4721]: E0202 13:23:11.595353 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad06c3d-8212-4a61-b491-2af939014fd6" containerName="ovn-config" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.595363 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad06c3d-8212-4a61-b491-2af939014fd6" containerName="ovn-config" Feb 02 13:23:11 crc kubenswrapper[4721]: E0202 13:23:11.595377 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33682180-e2b1-4c20-a374-1a90e1ccea48" containerName="mariadb-account-create-update" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.595385 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="33682180-e2b1-4c20-a374-1a90e1ccea48" containerName="mariadb-account-create-update" Feb 02 13:23:11 crc kubenswrapper[4721]: E0202 13:23:11.595397 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d74a2f0-9f60-4f59-92e4-11b9136f1db5" containerName="mariadb-account-create-update" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.595407 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d74a2f0-9f60-4f59-92e4-11b9136f1db5" containerName="mariadb-account-create-update" Feb 02 13:23:11 crc kubenswrapper[4721]: E0202 13:23:11.595425 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="937d142a-7868-4de2-85f3-90dcc5a74019" containerName="mariadb-database-create" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.595433 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="937d142a-7868-4de2-85f3-90dcc5a74019" containerName="mariadb-database-create" Feb 02 13:23:11 crc kubenswrapper[4721]: E0202 13:23:11.595448 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ad4533-c6a5-49da-8f33-23113f8b7fea" containerName="mariadb-database-create" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.595457 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ad4533-c6a5-49da-8f33-23113f8b7fea" containerName="mariadb-database-create" Feb 02 13:23:11 crc kubenswrapper[4721]: E0202 13:23:11.595474 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13666544-a226-43ee-84c9-3232e9fff8d4" containerName="mariadb-database-create" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.595482 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="13666544-a226-43ee-84c9-3232e9fff8d4" containerName="mariadb-database-create" Feb 02 13:23:11 crc kubenswrapper[4721]: E0202 13:23:11.595504 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46b707f0-c9cf-46b5-b615-4c0ab1da0391" containerName="mariadb-account-create-update" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.595511 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="46b707f0-c9cf-46b5-b615-4c0ab1da0391" 
containerName="mariadb-account-create-update" Feb 02 13:23:11 crc kubenswrapper[4721]: E0202 13:23:11.595532 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67f56b66-72ae-4c95-8051-dc5f7a0faec4" containerName="mariadb-account-create-update" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.595539 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f56b66-72ae-4c95-8051-dc5f7a0faec4" containerName="mariadb-account-create-update" Feb 02 13:23:11 crc kubenswrapper[4721]: E0202 13:23:11.595548 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="375b0aad-b921-41d8-af30-181ac4a73c0b" containerName="mariadb-account-create-update" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.595556 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="375b0aad-b921-41d8-af30-181ac4a73c0b" containerName="mariadb-account-create-update" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.595772 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d74a2f0-9f60-4f59-92e4-11b9136f1db5" containerName="mariadb-account-create-update" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.595784 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="51bc4821-8b8e-4972-a90e-67a7a7b1fee5" containerName="mariadb-database-create" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.595799 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="67f56b66-72ae-4c95-8051-dc5f7a0faec4" containerName="mariadb-account-create-update" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.595812 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="46b707f0-c9cf-46b5-b615-4c0ab1da0391" containerName="mariadb-account-create-update" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.595822 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="45ad4533-c6a5-49da-8f33-23113f8b7fea" containerName="mariadb-database-create" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.595834 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="937d142a-7868-4de2-85f3-90dcc5a74019" containerName="mariadb-database-create" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.595846 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="33682180-e2b1-4c20-a374-1a90e1ccea48" containerName="mariadb-account-create-update" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.595860 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="375b0aad-b921-41d8-af30-181ac4a73c0b" containerName="mariadb-account-create-update" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.595873 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad06c3d-8212-4a61-b491-2af939014fd6" containerName="ovn-config" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.595885 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="13666544-a226-43ee-84c9-3232e9fff8d4" containerName="mariadb-database-create" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.597323 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-kr74s" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.607298 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-kr74s"] Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.608192 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.671806 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-kr74s\" (UID: \"3f27ccd0-68e0-47da-a813-83684a0b1787\") " pod="openstack/dnsmasq-dns-764c5664d7-kr74s" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.672313 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5xtf\" (UniqueName: \"kubernetes.io/projected/3f27ccd0-68e0-47da-a813-83684a0b1787-kube-api-access-v5xtf\") pod \"dnsmasq-dns-764c5664d7-kr74s\" (UID: \"3f27ccd0-68e0-47da-a813-83684a0b1787\") " pod="openstack/dnsmasq-dns-764c5664d7-kr74s" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.672716 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-config\") pod \"dnsmasq-dns-764c5664d7-kr74s\" (UID: \"3f27ccd0-68e0-47da-a813-83684a0b1787\") " pod="openstack/dnsmasq-dns-764c5664d7-kr74s" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.672790 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-dns-svc\") pod \"dnsmasq-dns-764c5664d7-kr74s\" (UID: \"3f27ccd0-68e0-47da-a813-83684a0b1787\") " pod="openstack/dnsmasq-dns-764c5664d7-kr74s" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.672946 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-kr74s\" (UID: \"3f27ccd0-68e0-47da-a813-83684a0b1787\") " pod="openstack/dnsmasq-dns-764c5664d7-kr74s" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.673003 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-kr74s\" (UID: \"3f27ccd0-68e0-47da-a813-83684a0b1787\") " pod="openstack/dnsmasq-dns-764c5664d7-kr74s" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.775284 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-config\") pod \"dnsmasq-dns-764c5664d7-kr74s\" (UID: \"3f27ccd0-68e0-47da-a813-83684a0b1787\") " pod="openstack/dnsmasq-dns-764c5664d7-kr74s" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.775366 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-dns-svc\") pod \"dnsmasq-dns-764c5664d7-kr74s\" (UID: \"3f27ccd0-68e0-47da-a813-83684a0b1787\") " 
pod="openstack/dnsmasq-dns-764c5664d7-kr74s" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.775410 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-kr74s\" (UID: \"3f27ccd0-68e0-47da-a813-83684a0b1787\") " pod="openstack/dnsmasq-dns-764c5664d7-kr74s" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.775435 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-kr74s\" (UID: \"3f27ccd0-68e0-47da-a813-83684a0b1787\") " pod="openstack/dnsmasq-dns-764c5664d7-kr74s" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.775491 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-kr74s\" (UID: \"3f27ccd0-68e0-47da-a813-83684a0b1787\") " pod="openstack/dnsmasq-dns-764c5664d7-kr74s" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.775522 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5xtf\" (UniqueName: \"kubernetes.io/projected/3f27ccd0-68e0-47da-a813-83684a0b1787-kube-api-access-v5xtf\") pod \"dnsmasq-dns-764c5664d7-kr74s\" (UID: \"3f27ccd0-68e0-47da-a813-83684a0b1787\") " pod="openstack/dnsmasq-dns-764c5664d7-kr74s" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.776540 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-kr74s\" (UID: \"3f27ccd0-68e0-47da-a813-83684a0b1787\") " pod="openstack/dnsmasq-dns-764c5664d7-kr74s" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.776552 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-config\") pod \"dnsmasq-dns-764c5664d7-kr74s\" (UID: \"3f27ccd0-68e0-47da-a813-83684a0b1787\") " pod="openstack/dnsmasq-dns-764c5664d7-kr74s" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.776712 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-dns-svc\") pod \"dnsmasq-dns-764c5664d7-kr74s\" (UID: \"3f27ccd0-68e0-47da-a813-83684a0b1787\") " pod="openstack/dnsmasq-dns-764c5664d7-kr74s" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.776731 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-kr74s\" (UID: \"3f27ccd0-68e0-47da-a813-83684a0b1787\") " pod="openstack/dnsmasq-dns-764c5664d7-kr74s" Feb 02 13:23:11 crc kubenswrapper[4721]: I0202 13:23:11.777301 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-kr74s\" (UID: \"3f27ccd0-68e0-47da-a813-83684a0b1787\") " pod="openstack/dnsmasq-dns-764c5664d7-kr74s" Feb 02 13:23:11 crc kubenswrapper[4721]: 
I0202 13:23:11.968550 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5xtf\" (UniqueName: \"kubernetes.io/projected/3f27ccd0-68e0-47da-a813-83684a0b1787-kube-api-access-v5xtf\") pod \"dnsmasq-dns-764c5664d7-kr74s\" (UID: \"3f27ccd0-68e0-47da-a813-83684a0b1787\") " pod="openstack/dnsmasq-dns-764c5664d7-kr74s" Feb 02 13:23:12 crc kubenswrapper[4721]: I0202 13:23:12.216919 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-kr74s" Feb 02 13:23:12 crc kubenswrapper[4721]: W0202 13:23:12.725783 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f27ccd0_68e0_47da_a813_83684a0b1787.slice/crio-cb0e1c2486d91a6a369eb95b549c6f528badf3dc4c8e1256da54b5e5a3c88cfd WatchSource:0}: Error finding container cb0e1c2486d91a6a369eb95b549c6f528badf3dc4c8e1256da54b5e5a3c88cfd: Status 404 returned error can't find the container with id cb0e1c2486d91a6a369eb95b549c6f528badf3dc4c8e1256da54b5e5a3c88cfd Feb 02 13:23:12 crc kubenswrapper[4721]: I0202 13:23:12.725868 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-kr74s"] Feb 02 13:23:12 crc kubenswrapper[4721]: I0202 13:23:12.840043 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-ncb4h"] Feb 02 13:23:12 crc kubenswrapper[4721]: I0202 13:23:12.849025 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-ncb4h"] Feb 02 13:23:13 crc kubenswrapper[4721]: I0202 13:23:13.194000 4721 generic.go:334] "Generic (PLEG): container finished" podID="3f27ccd0-68e0-47da-a813-83684a0b1787" containerID="cf43302a7f3026dcc314dce1f4dfee24a1b2e493151bf89876aef92fa2f944d8" exitCode=0 Feb 02 13:23:13 crc kubenswrapper[4721]: I0202 13:23:13.194222 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-kr74s" event={"ID":"3f27ccd0-68e0-47da-a813-83684a0b1787","Type":"ContainerDied","Data":"cf43302a7f3026dcc314dce1f4dfee24a1b2e493151bf89876aef92fa2f944d8"} Feb 02 13:23:13 crc kubenswrapper[4721]: I0202 13:23:13.194271 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-kr74s" event={"ID":"3f27ccd0-68e0-47da-a813-83684a0b1787","Type":"ContainerStarted","Data":"cb0e1c2486d91a6a369eb95b549c6f528badf3dc4c8e1256da54b5e5a3c88cfd"} Feb 02 13:23:13 crc kubenswrapper[4721]: I0202 13:23:13.202406 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6a34d077-087f-4b04-98c5-22e09450dcb3","Type":"ContainerStarted","Data":"00cace9833aaa02756c80cc7ba5c1b7b04be74037fc31aabefc226265bb46bd7"} Feb 02 13:23:13 crc kubenswrapper[4721]: I0202 13:23:13.202459 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6a34d077-087f-4b04-98c5-22e09450dcb3","Type":"ContainerStarted","Data":"62357bb4f0d496ffbef788e049aa3d489bf0f7675e4270d9738c7403ce7db9f5"} Feb 02 13:23:13 crc kubenswrapper[4721]: I0202 13:23:13.267160 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=19.267138734 podStartE2EDuration="19.267138734s" podCreationTimestamp="2026-02-02 13:22:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:23:13.25959828 +0000 
UTC m=+1333.562112679" watchObservedRunningTime="2026-02-02 13:23:13.267138734 +0000 UTC m=+1333.569653123" Feb 02 13:23:14 crc kubenswrapper[4721]: I0202 13:23:14.214360 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-kr74s" event={"ID":"3f27ccd0-68e0-47da-a813-83684a0b1787","Type":"ContainerStarted","Data":"69ee8084b0c9e06457f722be7465dbe0e100403673683c49147169903cf21344"} Feb 02 13:23:14 crc kubenswrapper[4721]: I0202 13:23:14.234234 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-kr74s" podStartSLOduration=3.2342083 podStartE2EDuration="3.2342083s" podCreationTimestamp="2026-02-02 13:23:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:23:14.230357706 +0000 UTC m=+1334.532872095" watchObservedRunningTime="2026-02-02 13:23:14.2342083 +0000 UTC m=+1334.536722689" Feb 02 13:23:14 crc kubenswrapper[4721]: I0202 13:23:14.423510 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33682180-e2b1-4c20-a374-1a90e1ccea48" path="/var/lib/kubelet/pods/33682180-e2b1-4c20-a374-1a90e1ccea48/volumes" Feb 02 13:23:14 crc kubenswrapper[4721]: I0202 13:23:14.706200 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 02 13:23:15 crc kubenswrapper[4721]: I0202 13:23:15.223533 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-kr74s" Feb 02 13:23:16 crc kubenswrapper[4721]: I0202 13:23:16.234633 4721 generic.go:334] "Generic (PLEG): container finished" podID="3395efa1-7b43-4b48-9e06-764b9428c5ab" containerID="3bcf52e41be39651ef275074658dfd7b224ff525c369aa709ad250ff99c12eb1" exitCode=0 Feb 02 13:23:16 crc kubenswrapper[4721]: I0202 13:23:16.234737 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wvp2n" event={"ID":"3395efa1-7b43-4b48-9e06-764b9428c5ab","Type":"ContainerDied","Data":"3bcf52e41be39651ef275074658dfd7b224ff525c369aa709ad250ff99c12eb1"} Feb 02 13:23:17 crc kubenswrapper[4721]: I0202 13:23:17.747975 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wvp2n" Feb 02 13:23:17 crc kubenswrapper[4721]: I0202 13:23:17.870164 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-h675n"] Feb 02 13:23:17 crc kubenswrapper[4721]: E0202 13:23:17.870626 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3395efa1-7b43-4b48-9e06-764b9428c5ab" containerName="keystone-db-sync" Feb 02 13:23:17 crc kubenswrapper[4721]: I0202 13:23:17.870644 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="3395efa1-7b43-4b48-9e06-764b9428c5ab" containerName="keystone-db-sync" Feb 02 13:23:17 crc kubenswrapper[4721]: I0202 13:23:17.870857 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="3395efa1-7b43-4b48-9e06-764b9428c5ab" containerName="keystone-db-sync" Feb 02 13:23:17 crc kubenswrapper[4721]: I0202 13:23:17.871547 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-h675n" Feb 02 13:23:17 crc kubenswrapper[4721]: I0202 13:23:17.876745 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 02 13:23:17 crc kubenswrapper[4721]: I0202 13:23:17.882144 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-h675n"] Feb 02 13:23:17 crc kubenswrapper[4721]: I0202 13:23:17.912575 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppjqv\" (UniqueName: \"kubernetes.io/projected/3395efa1-7b43-4b48-9e06-764b9428c5ab-kube-api-access-ppjqv\") pod \"3395efa1-7b43-4b48-9e06-764b9428c5ab\" (UID: \"3395efa1-7b43-4b48-9e06-764b9428c5ab\") " Feb 02 13:23:17 crc kubenswrapper[4721]: I0202 13:23:17.912690 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3395efa1-7b43-4b48-9e06-764b9428c5ab-config-data\") pod \"3395efa1-7b43-4b48-9e06-764b9428c5ab\" (UID: \"3395efa1-7b43-4b48-9e06-764b9428c5ab\") " Feb 02 13:23:17 crc kubenswrapper[4721]: I0202 13:23:17.912807 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3395efa1-7b43-4b48-9e06-764b9428c5ab-combined-ca-bundle\") pod \"3395efa1-7b43-4b48-9e06-764b9428c5ab\" (UID: \"3395efa1-7b43-4b48-9e06-764b9428c5ab\") " Feb 02 13:23:17 crc kubenswrapper[4721]: I0202 13:23:17.929500 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3395efa1-7b43-4b48-9e06-764b9428c5ab-kube-api-access-ppjqv" (OuterVolumeSpecName: "kube-api-access-ppjqv") pod "3395efa1-7b43-4b48-9e06-764b9428c5ab" (UID: "3395efa1-7b43-4b48-9e06-764b9428c5ab"). InnerVolumeSpecName "kube-api-access-ppjqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:23:17 crc kubenswrapper[4721]: I0202 13:23:17.949134 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3395efa1-7b43-4b48-9e06-764b9428c5ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3395efa1-7b43-4b48-9e06-764b9428c5ab" (UID: "3395efa1-7b43-4b48-9e06-764b9428c5ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:23:17 crc kubenswrapper[4721]: I0202 13:23:17.970900 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3395efa1-7b43-4b48-9e06-764b9428c5ab-config-data" (OuterVolumeSpecName: "config-data") pod "3395efa1-7b43-4b48-9e06-764b9428c5ab" (UID: "3395efa1-7b43-4b48-9e06-764b9428c5ab"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.016623 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvhpv\" (UniqueName: \"kubernetes.io/projected/71ef45b1-9ff2-40ca-950a-07746f51eca9-kube-api-access-rvhpv\") pod \"root-account-create-update-h675n\" (UID: \"71ef45b1-9ff2-40ca-950a-07746f51eca9\") " pod="openstack/root-account-create-update-h675n" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.016811 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71ef45b1-9ff2-40ca-950a-07746f51eca9-operator-scripts\") pod \"root-account-create-update-h675n\" (UID: \"71ef45b1-9ff2-40ca-950a-07746f51eca9\") " pod="openstack/root-account-create-update-h675n" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.017195 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3395efa1-7b43-4b48-9e06-764b9428c5ab-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.017230 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3395efa1-7b43-4b48-9e06-764b9428c5ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.017246 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppjqv\" (UniqueName: \"kubernetes.io/projected/3395efa1-7b43-4b48-9e06-764b9428c5ab-kube-api-access-ppjqv\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.118480 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvhpv\" (UniqueName: \"kubernetes.io/projected/71ef45b1-9ff2-40ca-950a-07746f51eca9-kube-api-access-rvhpv\") pod \"root-account-create-update-h675n\" (UID: \"71ef45b1-9ff2-40ca-950a-07746f51eca9\") " pod="openstack/root-account-create-update-h675n" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.118570 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71ef45b1-9ff2-40ca-950a-07746f51eca9-operator-scripts\") pod \"root-account-create-update-h675n\" (UID: \"71ef45b1-9ff2-40ca-950a-07746f51eca9\") " pod="openstack/root-account-create-update-h675n" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.119510 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71ef45b1-9ff2-40ca-950a-07746f51eca9-operator-scripts\") pod \"root-account-create-update-h675n\" (UID: \"71ef45b1-9ff2-40ca-950a-07746f51eca9\") " pod="openstack/root-account-create-update-h675n" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.137700 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvhpv\" (UniqueName: \"kubernetes.io/projected/71ef45b1-9ff2-40ca-950a-07746f51eca9-kube-api-access-rvhpv\") pod \"root-account-create-update-h675n\" (UID: \"71ef45b1-9ff2-40ca-950a-07746f51eca9\") " pod="openstack/root-account-create-update-h675n" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.185996 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-h675n" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.256884 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wvp2n" event={"ID":"3395efa1-7b43-4b48-9e06-764b9428c5ab","Type":"ContainerDied","Data":"9a4595f760619b3ba238fcbf4e59500ea2d92f85648541e4ad855bfe4ec9d3fa"} Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.256940 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a4595f760619b3ba238fcbf4e59500ea2d92f85648541e4ad855bfe4ec9d3fa" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.257011 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wvp2n" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.535570 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-kr74s"] Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.536196 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-kr74s" podUID="3f27ccd0-68e0-47da-a813-83684a0b1787" containerName="dnsmasq-dns" containerID="cri-o://69ee8084b0c9e06457f722be7465dbe0e100403673683c49147169903cf21344" gracePeriod=10 Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.540491 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-kr74s" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.625200 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-htxgd"] Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.627531 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-htxgd" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.646140 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-l7z2q"] Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.647685 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-l7z2q" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.650505 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.650695 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7z9s7" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.650853 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.651143 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.651303 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.656885 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-htxgd"] Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.729463 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l7z2q"] Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.735783 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-combined-ca-bundle\") pod \"keystone-bootstrap-l7z2q\" (UID: \"af567124-fd0c-420e-b79b-41e8a7140cef\") " pod="openstack/keystone-bootstrap-l7z2q" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.741470 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-scripts\") pod \"keystone-bootstrap-l7z2q\" (UID: \"af567124-fd0c-420e-b79b-41e8a7140cef\") " pod="openstack/keystone-bootstrap-l7z2q" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.741527 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-config\") pod \"dnsmasq-dns-5959f8865f-htxgd\" (UID: \"42cb85a8-831b-4a92-936b-d79276a2d1e5\") " pod="openstack/dnsmasq-dns-5959f8865f-htxgd" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.741613 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-config-data\") pod \"keystone-bootstrap-l7z2q\" (UID: \"af567124-fd0c-420e-b79b-41e8a7140cef\") " pod="openstack/keystone-bootstrap-l7z2q" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.741683 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-fernet-keys\") pod \"keystone-bootstrap-l7z2q\" (UID: \"af567124-fd0c-420e-b79b-41e8a7140cef\") " pod="openstack/keystone-bootstrap-l7z2q" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.741730 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-htxgd\" (UID: \"42cb85a8-831b-4a92-936b-d79276a2d1e5\") " 
pod="openstack/dnsmasq-dns-5959f8865f-htxgd" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.741774 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-htxgd\" (UID: \"42cb85a8-831b-4a92-936b-d79276a2d1e5\") " pod="openstack/dnsmasq-dns-5959f8865f-htxgd" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.742043 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-dns-svc\") pod \"dnsmasq-dns-5959f8865f-htxgd\" (UID: \"42cb85a8-831b-4a92-936b-d79276a2d1e5\") " pod="openstack/dnsmasq-dns-5959f8865f-htxgd" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.742306 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-credential-keys\") pod \"keystone-bootstrap-l7z2q\" (UID: \"af567124-fd0c-420e-b79b-41e8a7140cef\") " pod="openstack/keystone-bootstrap-l7z2q" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.742363 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-htxgd\" (UID: \"42cb85a8-831b-4a92-936b-d79276a2d1e5\") " pod="openstack/dnsmasq-dns-5959f8865f-htxgd" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.742410 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzqjs\" (UniqueName: \"kubernetes.io/projected/af567124-fd0c-420e-b79b-41e8a7140cef-kube-api-access-hzqjs\") pod \"keystone-bootstrap-l7z2q\" (UID: \"af567124-fd0c-420e-b79b-41e8a7140cef\") " pod="openstack/keystone-bootstrap-l7z2q" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.742458 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2trvt\" (UniqueName: \"kubernetes.io/projected/42cb85a8-831b-4a92-936b-d79276a2d1e5-kube-api-access-2trvt\") pod \"dnsmasq-dns-5959f8865f-htxgd\" (UID: \"42cb85a8-831b-4a92-936b-d79276a2d1e5\") " pod="openstack/dnsmasq-dns-5959f8865f-htxgd" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.757019 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-n52pp"] Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.760382 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-n52pp" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.763846 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.764246 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-ldgvp" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.770780 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-n52pp"] Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.837153 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-h675n"] Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.858254 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-htxgd\" (UID: \"42cb85a8-831b-4a92-936b-d79276a2d1e5\") " pod="openstack/dnsmasq-dns-5959f8865f-htxgd" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.858343 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-htxgd\" (UID: \"42cb85a8-831b-4a92-936b-d79276a2d1e5\") " pod="openstack/dnsmasq-dns-5959f8865f-htxgd" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.858444 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa244a8-7588-4d87-bd5b-cbcd10780c83-combined-ca-bundle\") pod \"heat-db-sync-n52pp\" (UID: \"9fa244a8-7588-4d87-bd5b-cbcd10780c83\") " pod="openstack/heat-db-sync-n52pp" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.858595 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-dns-svc\") pod \"dnsmasq-dns-5959f8865f-htxgd\" (UID: \"42cb85a8-831b-4a92-936b-d79276a2d1e5\") " pod="openstack/dnsmasq-dns-5959f8865f-htxgd" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.858739 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-credential-keys\") pod \"keystone-bootstrap-l7z2q\" (UID: \"af567124-fd0c-420e-b79b-41e8a7140cef\") " pod="openstack/keystone-bootstrap-l7z2q" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.858806 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-htxgd\" (UID: \"42cb85a8-831b-4a92-936b-d79276a2d1e5\") " pod="openstack/dnsmasq-dns-5959f8865f-htxgd" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.858832 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa244a8-7588-4d87-bd5b-cbcd10780c83-config-data\") pod \"heat-db-sync-n52pp\" (UID: \"9fa244a8-7588-4d87-bd5b-cbcd10780c83\") " pod="openstack/heat-db-sync-n52pp" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.858885 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-hzqjs\" (UniqueName: \"kubernetes.io/projected/af567124-fd0c-420e-b79b-41e8a7140cef-kube-api-access-hzqjs\") pod \"keystone-bootstrap-l7z2q\" (UID: \"af567124-fd0c-420e-b79b-41e8a7140cef\") " pod="openstack/keystone-bootstrap-l7z2q" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.858927 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2trvt\" (UniqueName: \"kubernetes.io/projected/42cb85a8-831b-4a92-936b-d79276a2d1e5-kube-api-access-2trvt\") pod \"dnsmasq-dns-5959f8865f-htxgd\" (UID: \"42cb85a8-831b-4a92-936b-d79276a2d1e5\") " pod="openstack/dnsmasq-dns-5959f8865f-htxgd" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.859032 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-combined-ca-bundle\") pod \"keystone-bootstrap-l7z2q\" (UID: \"af567124-fd0c-420e-b79b-41e8a7140cef\") " pod="openstack/keystone-bootstrap-l7z2q" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.859087 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ct7f\" (UniqueName: \"kubernetes.io/projected/9fa244a8-7588-4d87-bd5b-cbcd10780c83-kube-api-access-2ct7f\") pod \"heat-db-sync-n52pp\" (UID: \"9fa244a8-7588-4d87-bd5b-cbcd10780c83\") " pod="openstack/heat-db-sync-n52pp" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.859131 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-scripts\") pod \"keystone-bootstrap-l7z2q\" (UID: \"af567124-fd0c-420e-b79b-41e8a7140cef\") " pod="openstack/keystone-bootstrap-l7z2q" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.859164 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-config\") pod \"dnsmasq-dns-5959f8865f-htxgd\" (UID: \"42cb85a8-831b-4a92-936b-d79276a2d1e5\") " pod="openstack/dnsmasq-dns-5959f8865f-htxgd" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.859238 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-config-data\") pod \"keystone-bootstrap-l7z2q\" (UID: \"af567124-fd0c-420e-b79b-41e8a7140cef\") " pod="openstack/keystone-bootstrap-l7z2q" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.859300 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-fernet-keys\") pod \"keystone-bootstrap-l7z2q\" (UID: \"af567124-fd0c-420e-b79b-41e8a7140cef\") " pod="openstack/keystone-bootstrap-l7z2q" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.860409 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-htxgd\" (UID: \"42cb85a8-831b-4a92-936b-d79276a2d1e5\") " pod="openstack/dnsmasq-dns-5959f8865f-htxgd" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.869710 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-config\") pod 
\"dnsmasq-dns-5959f8865f-htxgd\" (UID: \"42cb85a8-831b-4a92-936b-d79276a2d1e5\") " pod="openstack/dnsmasq-dns-5959f8865f-htxgd" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.878652 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-htxgd\" (UID: \"42cb85a8-831b-4a92-936b-d79276a2d1e5\") " pod="openstack/dnsmasq-dns-5959f8865f-htxgd" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.879589 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-dns-svc\") pod \"dnsmasq-dns-5959f8865f-htxgd\" (UID: \"42cb85a8-831b-4a92-936b-d79276a2d1e5\") " pod="openstack/dnsmasq-dns-5959f8865f-htxgd" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.891239 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-htxgd\" (UID: \"42cb85a8-831b-4a92-936b-d79276a2d1e5\") " pod="openstack/dnsmasq-dns-5959f8865f-htxgd" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.911649 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-scripts\") pod \"keystone-bootstrap-l7z2q\" (UID: \"af567124-fd0c-420e-b79b-41e8a7140cef\") " pod="openstack/keystone-bootstrap-l7z2q" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.913694 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-combined-ca-bundle\") pod \"keystone-bootstrap-l7z2q\" (UID: \"af567124-fd0c-420e-b79b-41e8a7140cef\") " pod="openstack/keystone-bootstrap-l7z2q" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.914056 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-fernet-keys\") pod \"keystone-bootstrap-l7z2q\" (UID: \"af567124-fd0c-420e-b79b-41e8a7140cef\") " pod="openstack/keystone-bootstrap-l7z2q" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.916565 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-2tnbk"] Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.919311 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-2tnbk" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.929391 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4l6jw" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.929662 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.929817 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.932727 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-config-data\") pod \"keystone-bootstrap-l7z2q\" (UID: \"af567124-fd0c-420e-b79b-41e8a7140cef\") " pod="openstack/keystone-bootstrap-l7z2q" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.936762 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-credential-keys\") pod \"keystone-bootstrap-l7z2q\" (UID: \"af567124-fd0c-420e-b79b-41e8a7140cef\") " pod="openstack/keystone-bootstrap-l7z2q" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.955279 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2trvt\" (UniqueName: \"kubernetes.io/projected/42cb85a8-831b-4a92-936b-d79276a2d1e5-kube-api-access-2trvt\") pod \"dnsmasq-dns-5959f8865f-htxgd\" (UID: \"42cb85a8-831b-4a92-936b-d79276a2d1e5\") " pod="openstack/dnsmasq-dns-5959f8865f-htxgd" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.962511 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa244a8-7588-4d87-bd5b-cbcd10780c83-config-data\") pod \"heat-db-sync-n52pp\" (UID: \"9fa244a8-7588-4d87-bd5b-cbcd10780c83\") " pod="openstack/heat-db-sync-n52pp" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.962779 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ct7f\" (UniqueName: \"kubernetes.io/projected/9fa244a8-7588-4d87-bd5b-cbcd10780c83-kube-api-access-2ct7f\") pod \"heat-db-sync-n52pp\" (UID: \"9fa244a8-7588-4d87-bd5b-cbcd10780c83\") " pod="openstack/heat-db-sync-n52pp" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.962955 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa244a8-7588-4d87-bd5b-cbcd10780c83-combined-ca-bundle\") pod \"heat-db-sync-n52pp\" (UID: \"9fa244a8-7588-4d87-bd5b-cbcd10780c83\") " pod="openstack/heat-db-sync-n52pp" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.973027 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-7wjxh"] Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.984741 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa244a8-7588-4d87-bd5b-cbcd10780c83-config-data\") pod \"heat-db-sync-n52pp\" (UID: \"9fa244a8-7588-4d87-bd5b-cbcd10780c83\") " pod="openstack/heat-db-sync-n52pp" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.995726 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzqjs\" (UniqueName: 
\"kubernetes.io/projected/af567124-fd0c-420e-b79b-41e8a7140cef-kube-api-access-hzqjs\") pod \"keystone-bootstrap-l7z2q\" (UID: \"af567124-fd0c-420e-b79b-41e8a7140cef\") " pod="openstack/keystone-bootstrap-l7z2q" Feb 02 13:23:18 crc kubenswrapper[4721]: I0202 13:23:18.996236 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7wjxh" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:18.998904 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.003637 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-v76dv" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.003749 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.004897 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa244a8-7588-4d87-bd5b-cbcd10780c83-combined-ca-bundle\") pod \"heat-db-sync-n52pp\" (UID: \"9fa244a8-7588-4d87-bd5b-cbcd10780c83\") " pod="openstack/heat-db-sync-n52pp" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.038923 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7wjxh"] Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.045356 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ct7f\" (UniqueName: \"kubernetes.io/projected/9fa244a8-7588-4d87-bd5b-cbcd10780c83-kube-api-access-2ct7f\") pod \"heat-db-sync-n52pp\" (UID: \"9fa244a8-7588-4d87-bd5b-cbcd10780c83\") " pod="openstack/heat-db-sync-n52pp" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.061177 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-2tnbk"] Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.061485 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-l7z2q" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.066786 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/026bbe7a-aec9-40ee-9be3-cdb35054e076-combined-ca-bundle\") pod \"neutron-db-sync-2tnbk\" (UID: \"026bbe7a-aec9-40ee-9be3-cdb35054e076\") " pod="openstack/neutron-db-sync-2tnbk" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.066856 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4b6x\" (UniqueName: \"kubernetes.io/projected/ad3578ef-5d1b-4c52-939c-237feadc1c5c-kube-api-access-c4b6x\") pod \"cinder-db-sync-7wjxh\" (UID: \"ad3578ef-5d1b-4c52-939c-237feadc1c5c\") " pod="openstack/cinder-db-sync-7wjxh" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.066918 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad3578ef-5d1b-4c52-939c-237feadc1c5c-etc-machine-id\") pod \"cinder-db-sync-7wjxh\" (UID: \"ad3578ef-5d1b-4c52-939c-237feadc1c5c\") " pod="openstack/cinder-db-sync-7wjxh" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.067038 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad3578ef-5d1b-4c52-939c-237feadc1c5c-db-sync-config-data\") pod \"cinder-db-sync-7wjxh\" (UID: \"ad3578ef-5d1b-4c52-939c-237feadc1c5c\") " pod="openstack/cinder-db-sync-7wjxh" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.067089 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad3578ef-5d1b-4c52-939c-237feadc1c5c-config-data\") pod \"cinder-db-sync-7wjxh\" (UID: \"ad3578ef-5d1b-4c52-939c-237feadc1c5c\") " pod="openstack/cinder-db-sync-7wjxh" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.067122 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad3578ef-5d1b-4c52-939c-237feadc1c5c-scripts\") pod \"cinder-db-sync-7wjxh\" (UID: \"ad3578ef-5d1b-4c52-939c-237feadc1c5c\") " pod="openstack/cinder-db-sync-7wjxh" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.070285 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kszgd\" (UniqueName: \"kubernetes.io/projected/026bbe7a-aec9-40ee-9be3-cdb35054e076-kube-api-access-kszgd\") pod \"neutron-db-sync-2tnbk\" (UID: \"026bbe7a-aec9-40ee-9be3-cdb35054e076\") " pod="openstack/neutron-db-sync-2tnbk" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.070376 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/026bbe7a-aec9-40ee-9be3-cdb35054e076-config\") pod \"neutron-db-sync-2tnbk\" (UID: \"026bbe7a-aec9-40ee-9be3-cdb35054e076\") " pod="openstack/neutron-db-sync-2tnbk" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.070522 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad3578ef-5d1b-4c52-939c-237feadc1c5c-combined-ca-bundle\") pod \"cinder-db-sync-7wjxh\" (UID: 
\"ad3578ef-5d1b-4c52-939c-237feadc1c5c\") " pod="openstack/cinder-db-sync-7wjxh" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.148695 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-htxgd" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.166029 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-86z2v"] Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.168350 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-86z2v" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.173117 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad3578ef-5d1b-4c52-939c-237feadc1c5c-combined-ca-bundle\") pod \"cinder-db-sync-7wjxh\" (UID: \"ad3578ef-5d1b-4c52-939c-237feadc1c5c\") " pod="openstack/cinder-db-sync-7wjxh" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.173198 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/026bbe7a-aec9-40ee-9be3-cdb35054e076-combined-ca-bundle\") pod \"neutron-db-sync-2tnbk\" (UID: \"026bbe7a-aec9-40ee-9be3-cdb35054e076\") " pod="openstack/neutron-db-sync-2tnbk" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.173236 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4b6x\" (UniqueName: \"kubernetes.io/projected/ad3578ef-5d1b-4c52-939c-237feadc1c5c-kube-api-access-c4b6x\") pod \"cinder-db-sync-7wjxh\" (UID: \"ad3578ef-5d1b-4c52-939c-237feadc1c5c\") " pod="openstack/cinder-db-sync-7wjxh" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.173294 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad3578ef-5d1b-4c52-939c-237feadc1c5c-etc-machine-id\") pod \"cinder-db-sync-7wjxh\" (UID: \"ad3578ef-5d1b-4c52-939c-237feadc1c5c\") " pod="openstack/cinder-db-sync-7wjxh" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.173422 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad3578ef-5d1b-4c52-939c-237feadc1c5c-db-sync-config-data\") pod \"cinder-db-sync-7wjxh\" (UID: \"ad3578ef-5d1b-4c52-939c-237feadc1c5c\") " pod="openstack/cinder-db-sync-7wjxh" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.173469 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad3578ef-5d1b-4c52-939c-237feadc1c5c-config-data\") pod \"cinder-db-sync-7wjxh\" (UID: \"ad3578ef-5d1b-4c52-939c-237feadc1c5c\") " pod="openstack/cinder-db-sync-7wjxh" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.173518 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad3578ef-5d1b-4c52-939c-237feadc1c5c-scripts\") pod \"cinder-db-sync-7wjxh\" (UID: \"ad3578ef-5d1b-4c52-939c-237feadc1c5c\") " pod="openstack/cinder-db-sync-7wjxh" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.173556 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kszgd\" (UniqueName: \"kubernetes.io/projected/026bbe7a-aec9-40ee-9be3-cdb35054e076-kube-api-access-kszgd\") pod \"neutron-db-sync-2tnbk\" (UID: 
\"026bbe7a-aec9-40ee-9be3-cdb35054e076\") " pod="openstack/neutron-db-sync-2tnbk" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.173594 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/026bbe7a-aec9-40ee-9be3-cdb35054e076-config\") pod \"neutron-db-sync-2tnbk\" (UID: \"026bbe7a-aec9-40ee-9be3-cdb35054e076\") " pod="openstack/neutron-db-sync-2tnbk" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.185533 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.186008 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-tfr2p" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.186343 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.188131 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-htxgd"] Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.191608 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad3578ef-5d1b-4c52-939c-237feadc1c5c-etc-machine-id\") pod \"cinder-db-sync-7wjxh\" (UID: \"ad3578ef-5d1b-4c52-939c-237feadc1c5c\") " pod="openstack/cinder-db-sync-7wjxh" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.219784 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad3578ef-5d1b-4c52-939c-237feadc1c5c-db-sync-config-data\") pod \"cinder-db-sync-7wjxh\" (UID: \"ad3578ef-5d1b-4c52-939c-237feadc1c5c\") " pod="openstack/cinder-db-sync-7wjxh" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.222186 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-86z2v"] Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.222621 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad3578ef-5d1b-4c52-939c-237feadc1c5c-scripts\") pod \"cinder-db-sync-7wjxh\" (UID: \"ad3578ef-5d1b-4c52-939c-237feadc1c5c\") " pod="openstack/cinder-db-sync-7wjxh" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.225532 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad3578ef-5d1b-4c52-939c-237feadc1c5c-config-data\") pod \"cinder-db-sync-7wjxh\" (UID: \"ad3578ef-5d1b-4c52-939c-237feadc1c5c\") " pod="openstack/cinder-db-sync-7wjxh" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.226240 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad3578ef-5d1b-4c52-939c-237feadc1c5c-combined-ca-bundle\") pod \"cinder-db-sync-7wjxh\" (UID: \"ad3578ef-5d1b-4c52-939c-237feadc1c5c\") " pod="openstack/cinder-db-sync-7wjxh" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.234502 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/026bbe7a-aec9-40ee-9be3-cdb35054e076-config\") pod \"neutron-db-sync-2tnbk\" (UID: \"026bbe7a-aec9-40ee-9be3-cdb35054e076\") " pod="openstack/neutron-db-sync-2tnbk" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.235185 4721 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/026bbe7a-aec9-40ee-9be3-cdb35054e076-combined-ca-bundle\") pod \"neutron-db-sync-2tnbk\" (UID: \"026bbe7a-aec9-40ee-9be3-cdb35054e076\") " pod="openstack/neutron-db-sync-2tnbk" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.236844 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4b6x\" (UniqueName: \"kubernetes.io/projected/ad3578ef-5d1b-4c52-939c-237feadc1c5c-kube-api-access-c4b6x\") pod \"cinder-db-sync-7wjxh\" (UID: \"ad3578ef-5d1b-4c52-939c-237feadc1c5c\") " pod="openstack/cinder-db-sync-7wjxh" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.249421 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-f8lxs"] Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.258655 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kszgd\" (UniqueName: \"kubernetes.io/projected/026bbe7a-aec9-40ee-9be3-cdb35054e076-kube-api-access-kszgd\") pod \"neutron-db-sync-2tnbk\" (UID: \"026bbe7a-aec9-40ee-9be3-cdb35054e076\") " pod="openstack/neutron-db-sync-2tnbk" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.263392 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.283812 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-f8lxs"] Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.300912 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h675n" event={"ID":"71ef45b1-9ff2-40ca-950a-07746f51eca9","Type":"ContainerStarted","Data":"3cbb26dd329c92faeeafbf64445c9a0bfc1db825e69815225692a1b41aaa4b51"} Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.303297 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc8kv\" (UniqueName: \"kubernetes.io/projected/bdd67c16-7130-4095-952f-006aa5bcd5bb-kube-api-access-fc8kv\") pod \"placement-db-sync-86z2v\" (UID: \"bdd67c16-7130-4095-952f-006aa5bcd5bb\") " pod="openstack/placement-db-sync-86z2v" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.303433 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdd67c16-7130-4095-952f-006aa5bcd5bb-combined-ca-bundle\") pod \"placement-db-sync-86z2v\" (UID: \"bdd67c16-7130-4095-952f-006aa5bcd5bb\") " pod="openstack/placement-db-sync-86z2v" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.303459 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdd67c16-7130-4095-952f-006aa5bcd5bb-config-data\") pod \"placement-db-sync-86z2v\" (UID: \"bdd67c16-7130-4095-952f-006aa5bcd5bb\") " pod="openstack/placement-db-sync-86z2v" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.303496 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdd67c16-7130-4095-952f-006aa5bcd5bb-scripts\") pod \"placement-db-sync-86z2v\" (UID: \"bdd67c16-7130-4095-952f-006aa5bcd5bb\") " pod="openstack/placement-db-sync-86z2v" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.303718 4721 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdd67c16-7130-4095-952f-006aa5bcd5bb-logs\") pod \"placement-db-sync-86z2v\" (UID: \"bdd67c16-7130-4095-952f-006aa5bcd5bb\") " pod="openstack/placement-db-sync-86z2v" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.331408 4721 generic.go:334] "Generic (PLEG): container finished" podID="3f27ccd0-68e0-47da-a813-83684a0b1787" containerID="69ee8084b0c9e06457f722be7465dbe0e100403673683c49147169903cf21344" exitCode=0 Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.331485 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-cgqfl"] Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.333260 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-kr74s" event={"ID":"3f27ccd0-68e0-47da-a813-83684a0b1787","Type":"ContainerDied","Data":"69ee8084b0c9e06457f722be7465dbe0e100403673683c49147169903cf21344"} Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.333377 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cgqfl" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.342618 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-27rl5" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.342898 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.359977 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-cgqfl"] Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.407522 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47a4176b-5f58-47a9-a614-e5d05526da18-combined-ca-bundle\") pod \"barbican-db-sync-cgqfl\" (UID: \"47a4176b-5f58-47a9-a614-e5d05526da18\") " pod="openstack/barbican-db-sync-cgqfl" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.407563 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-f8lxs\" (UID: \"42af4b6d-a3ac-4a90-8338-71dcdba65713\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.407607 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdd67c16-7130-4095-952f-006aa5bcd5bb-logs\") pod \"placement-db-sync-86z2v\" (UID: \"bdd67c16-7130-4095-952f-006aa5bcd5bb\") " pod="openstack/placement-db-sync-86z2v" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.407629 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-f8lxs\" (UID: \"42af4b6d-a3ac-4a90-8338-71dcdba65713\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.407650 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t64pt\" (UniqueName: \"kubernetes.io/projected/42af4b6d-a3ac-4a90-8338-71dcdba65713-kube-api-access-t64pt\") 
pod \"dnsmasq-dns-58dd9ff6bc-f8lxs\" (UID: \"42af4b6d-a3ac-4a90-8338-71dcdba65713\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.407687 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/47a4176b-5f58-47a9-a614-e5d05526da18-db-sync-config-data\") pod \"barbican-db-sync-cgqfl\" (UID: \"47a4176b-5f58-47a9-a614-e5d05526da18\") " pod="openstack/barbican-db-sync-cgqfl" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.407708 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-config\") pod \"dnsmasq-dns-58dd9ff6bc-f8lxs\" (UID: \"42af4b6d-a3ac-4a90-8338-71dcdba65713\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.407733 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-f8lxs\" (UID: \"42af4b6d-a3ac-4a90-8338-71dcdba65713\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.407773 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc8kv\" (UniqueName: \"kubernetes.io/projected/bdd67c16-7130-4095-952f-006aa5bcd5bb-kube-api-access-fc8kv\") pod \"placement-db-sync-86z2v\" (UID: \"bdd67c16-7130-4095-952f-006aa5bcd5bb\") " pod="openstack/placement-db-sync-86z2v" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.407818 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdd67c16-7130-4095-952f-006aa5bcd5bb-combined-ca-bundle\") pod \"placement-db-sync-86z2v\" (UID: \"bdd67c16-7130-4095-952f-006aa5bcd5bb\") " pod="openstack/placement-db-sync-86z2v" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.407836 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdd67c16-7130-4095-952f-006aa5bcd5bb-config-data\") pod \"placement-db-sync-86z2v\" (UID: \"bdd67c16-7130-4095-952f-006aa5bcd5bb\") " pod="openstack/placement-db-sync-86z2v" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.407862 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdd67c16-7130-4095-952f-006aa5bcd5bb-scripts\") pod \"placement-db-sync-86z2v\" (UID: \"bdd67c16-7130-4095-952f-006aa5bcd5bb\") " pod="openstack/placement-db-sync-86z2v" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.407897 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-777ht\" (UniqueName: \"kubernetes.io/projected/47a4176b-5f58-47a9-a614-e5d05526da18-kube-api-access-777ht\") pod \"barbican-db-sync-cgqfl\" (UID: \"47a4176b-5f58-47a9-a614-e5d05526da18\") " pod="openstack/barbican-db-sync-cgqfl" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.407923 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-dns-swift-storage-0\") pod 
\"dnsmasq-dns-58dd9ff6bc-f8lxs\" (UID: \"42af4b6d-a3ac-4a90-8338-71dcdba65713\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.408450 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdd67c16-7130-4095-952f-006aa5bcd5bb-logs\") pod \"placement-db-sync-86z2v\" (UID: \"bdd67c16-7130-4095-952f-006aa5bcd5bb\") " pod="openstack/placement-db-sync-86z2v" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.426899 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdd67c16-7130-4095-952f-006aa5bcd5bb-scripts\") pod \"placement-db-sync-86z2v\" (UID: \"bdd67c16-7130-4095-952f-006aa5bcd5bb\") " pod="openstack/placement-db-sync-86z2v" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.430153 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdd67c16-7130-4095-952f-006aa5bcd5bb-combined-ca-bundle\") pod \"placement-db-sync-86z2v\" (UID: \"bdd67c16-7130-4095-952f-006aa5bcd5bb\") " pod="openstack/placement-db-sync-86z2v" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.432124 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdd67c16-7130-4095-952f-006aa5bcd5bb-config-data\") pod \"placement-db-sync-86z2v\" (UID: \"bdd67c16-7130-4095-952f-006aa5bcd5bb\") " pod="openstack/placement-db-sync-86z2v" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.436999 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc8kv\" (UniqueName: \"kubernetes.io/projected/bdd67c16-7130-4095-952f-006aa5bcd5bb-kube-api-access-fc8kv\") pod \"placement-db-sync-86z2v\" (UID: \"bdd67c16-7130-4095-952f-006aa5bcd5bb\") " pod="openstack/placement-db-sync-86z2v" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.512327 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47a4176b-5f58-47a9-a614-e5d05526da18-combined-ca-bundle\") pod \"barbican-db-sync-cgqfl\" (UID: \"47a4176b-5f58-47a9-a614-e5d05526da18\") " pod="openstack/barbican-db-sync-cgqfl" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.512380 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-f8lxs\" (UID: \"42af4b6d-a3ac-4a90-8338-71dcdba65713\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.512424 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-f8lxs\" (UID: \"42af4b6d-a3ac-4a90-8338-71dcdba65713\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.512445 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t64pt\" (UniqueName: \"kubernetes.io/projected/42af4b6d-a3ac-4a90-8338-71dcdba65713-kube-api-access-t64pt\") pod \"dnsmasq-dns-58dd9ff6bc-f8lxs\" (UID: \"42af4b6d-a3ac-4a90-8338-71dcdba65713\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" Feb 02 13:23:19 crc 
kubenswrapper[4721]: I0202 13:23:19.512474 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/47a4176b-5f58-47a9-a614-e5d05526da18-db-sync-config-data\") pod \"barbican-db-sync-cgqfl\" (UID: \"47a4176b-5f58-47a9-a614-e5d05526da18\") " pod="openstack/barbican-db-sync-cgqfl" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.512504 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-config\") pod \"dnsmasq-dns-58dd9ff6bc-f8lxs\" (UID: \"42af4b6d-a3ac-4a90-8338-71dcdba65713\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.512533 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-f8lxs\" (UID: \"42af4b6d-a3ac-4a90-8338-71dcdba65713\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.512648 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-777ht\" (UniqueName: \"kubernetes.io/projected/47a4176b-5f58-47a9-a614-e5d05526da18-kube-api-access-777ht\") pod \"barbican-db-sync-cgqfl\" (UID: \"47a4176b-5f58-47a9-a614-e5d05526da18\") " pod="openstack/barbican-db-sync-cgqfl" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.512682 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-f8lxs\" (UID: \"42af4b6d-a3ac-4a90-8338-71dcdba65713\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.513744 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-f8lxs\" (UID: \"42af4b6d-a3ac-4a90-8338-71dcdba65713\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.513771 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-f8lxs\" (UID: \"42af4b6d-a3ac-4a90-8338-71dcdba65713\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.513956 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-config\") pod \"dnsmasq-dns-58dd9ff6bc-f8lxs\" (UID: \"42af4b6d-a3ac-4a90-8338-71dcdba65713\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.514773 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-f8lxs\" (UID: \"42af4b6d-a3ac-4a90-8338-71dcdba65713\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.525567 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-f8lxs\" (UID: \"42af4b6d-a3ac-4a90-8338-71dcdba65713\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.539004 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47a4176b-5f58-47a9-a614-e5d05526da18-combined-ca-bundle\") pod \"barbican-db-sync-cgqfl\" (UID: \"47a4176b-5f58-47a9-a614-e5d05526da18\") " pod="openstack/barbican-db-sync-cgqfl" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.541643 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/47a4176b-5f58-47a9-a614-e5d05526da18-db-sync-config-data\") pod \"barbican-db-sync-cgqfl\" (UID: \"47a4176b-5f58-47a9-a614-e5d05526da18\") " pod="openstack/barbican-db-sync-cgqfl" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.564601 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t64pt\" (UniqueName: \"kubernetes.io/projected/42af4b6d-a3ac-4a90-8338-71dcdba65713-kube-api-access-t64pt\") pod \"dnsmasq-dns-58dd9ff6bc-f8lxs\" (UID: \"42af4b6d-a3ac-4a90-8338-71dcdba65713\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.568689 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-n52pp" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.570801 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-777ht\" (UniqueName: \"kubernetes.io/projected/47a4176b-5f58-47a9-a614-e5d05526da18-kube-api-access-777ht\") pod \"barbican-db-sync-cgqfl\" (UID: \"47a4176b-5f58-47a9-a614-e5d05526da18\") " pod="openstack/barbican-db-sync-cgqfl" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.579298 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-2tnbk" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.592596 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7wjxh" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.607721 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-86z2v" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.630415 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.775963 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.780399 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.785562 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.799249 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.804634 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.927426 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\") " pod="openstack/ceilometer-0" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.927917 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-scripts\") pod \"ceilometer-0\" (UID: \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\") " pod="openstack/ceilometer-0" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.927956 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-log-httpd\") pod \"ceilometer-0\" (UID: \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\") " pod="openstack/ceilometer-0" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.927999 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbbzr\" (UniqueName: \"kubernetes.io/projected/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-kube-api-access-fbbzr\") pod \"ceilometer-0\" (UID: \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\") " pod="openstack/ceilometer-0" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.928051 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\") " pod="openstack/ceilometer-0" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.928135 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-run-httpd\") pod \"ceilometer-0\" (UID: \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\") " pod="openstack/ceilometer-0" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.928285 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-config-data\") pod \"ceilometer-0\" (UID: \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\") " pod="openstack/ceilometer-0" Feb 02 13:23:19 crc kubenswrapper[4721]: I0202 13:23:19.959192 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-cgqfl" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.032036 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-config-data\") pod \"ceilometer-0\" (UID: \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\") " pod="openstack/ceilometer-0" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.032140 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\") " pod="openstack/ceilometer-0" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.032195 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-scripts\") pod \"ceilometer-0\" (UID: \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\") " pod="openstack/ceilometer-0" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.032215 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-log-httpd\") pod \"ceilometer-0\" (UID: \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\") " pod="openstack/ceilometer-0" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.032245 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbbzr\" (UniqueName: \"kubernetes.io/projected/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-kube-api-access-fbbzr\") pod \"ceilometer-0\" (UID: \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\") " pod="openstack/ceilometer-0" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.032270 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\") " pod="openstack/ceilometer-0" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.032332 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-run-httpd\") pod \"ceilometer-0\" (UID: \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\") " pod="openstack/ceilometer-0" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.033095 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-kr74s" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.041990 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-run-httpd\") pod \"ceilometer-0\" (UID: \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\") " pod="openstack/ceilometer-0" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.048620 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-log-httpd\") pod \"ceilometer-0\" (UID: \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\") " pod="openstack/ceilometer-0" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.068224 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-config-data\") pod \"ceilometer-0\" (UID: \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\") " pod="openstack/ceilometer-0" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.104105 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\") " pod="openstack/ceilometer-0" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.106103 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-scripts\") pod \"ceilometer-0\" (UID: \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\") " pod="openstack/ceilometer-0" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.123009 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\") " pod="openstack/ceilometer-0" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.136142 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5xtf\" (UniqueName: \"kubernetes.io/projected/3f27ccd0-68e0-47da-a813-83684a0b1787-kube-api-access-v5xtf\") pod \"3f27ccd0-68e0-47da-a813-83684a0b1787\" (UID: \"3f27ccd0-68e0-47da-a813-83684a0b1787\") " Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.136376 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-config\") pod \"3f27ccd0-68e0-47da-a813-83684a0b1787\" (UID: \"3f27ccd0-68e0-47da-a813-83684a0b1787\") " Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.138198 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-dns-swift-storage-0\") pod \"3f27ccd0-68e0-47da-a813-83684a0b1787\" (UID: \"3f27ccd0-68e0-47da-a813-83684a0b1787\") " Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.138237 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-ovsdbserver-nb\") pod \"3f27ccd0-68e0-47da-a813-83684a0b1787\" (UID: \"3f27ccd0-68e0-47da-a813-83684a0b1787\") " Feb 
02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.139913 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbbzr\" (UniqueName: \"kubernetes.io/projected/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-kube-api-access-fbbzr\") pod \"ceilometer-0\" (UID: \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\") " pod="openstack/ceilometer-0" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.139931 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-dns-svc\") pod \"3f27ccd0-68e0-47da-a813-83684a0b1787\" (UID: \"3f27ccd0-68e0-47da-a813-83684a0b1787\") " Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.140216 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-ovsdbserver-sb\") pod \"3f27ccd0-68e0-47da-a813-83684a0b1787\" (UID: \"3f27ccd0-68e0-47da-a813-83684a0b1787\") " Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.155882 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.162472 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f27ccd0-68e0-47da-a813-83684a0b1787-kube-api-access-v5xtf" (OuterVolumeSpecName: "kube-api-access-v5xtf") pod "3f27ccd0-68e0-47da-a813-83684a0b1787" (UID: "3f27ccd0-68e0-47da-a813-83684a0b1787"). InnerVolumeSpecName "kube-api-access-v5xtf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.264704 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l7z2q"] Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.268312 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5xtf\" (UniqueName: \"kubernetes.io/projected/3f27ccd0-68e0-47da-a813-83684a0b1787-kube-api-access-v5xtf\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.475114 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-config" (OuterVolumeSpecName: "config") pod "3f27ccd0-68e0-47da-a813-83684a0b1787" (UID: "3f27ccd0-68e0-47da-a813-83684a0b1787"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.478033 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-kr74s" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.488271 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.513983 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3f27ccd0-68e0-47da-a813-83684a0b1787" (UID: "3f27ccd0-68e0-47da-a813-83684a0b1787"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.569288 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3f27ccd0-68e0-47da-a813-83684a0b1787" (UID: "3f27ccd0-68e0-47da-a813-83684a0b1787"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.569937 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3f27ccd0-68e0-47da-a813-83684a0b1787" (UID: "3f27ccd0-68e0-47da-a813-83684a0b1787"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.600207 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.600259 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.600275 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.643427 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-kr74s" event={"ID":"3f27ccd0-68e0-47da-a813-83684a0b1787","Type":"ContainerDied","Data":"cb0e1c2486d91a6a369eb95b549c6f528badf3dc4c8e1256da54b5e5a3c88cfd"} Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.643479 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l7z2q" event={"ID":"af567124-fd0c-420e-b79b-41e8a7140cef","Type":"ContainerStarted","Data":"0b6290d6ebded27a4f316246821ca1ecc86a05d55f3135936f20ff3e7cfb9766"} Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.643496 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h675n" event={"ID":"71ef45b1-9ff2-40ca-950a-07746f51eca9","Type":"ContainerStarted","Data":"4fe33e5a77c87b36e6e67a1b771c77b2a949e44a54e8c2f88295e47c2b68d215"} Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.643544 4721 scope.go:117] "RemoveContainer" containerID="69ee8084b0c9e06457f722be7465dbe0e100403673683c49147169903cf21344" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.705900 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-htxgd"] Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.827535 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3f27ccd0-68e0-47da-a813-83684a0b1787" (UID: "3f27ccd0-68e0-47da-a813-83684a0b1787"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.830848 4721 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3f27ccd0-68e0-47da-a813-83684a0b1787-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.875131 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-h675n" podStartSLOduration=3.875044902 podStartE2EDuration="3.875044902s" podCreationTimestamp="2026-02-02 13:23:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:23:20.62786265 +0000 UTC m=+1340.930377039" watchObservedRunningTime="2026-02-02 13:23:20.875044902 +0000 UTC m=+1341.177559311" Feb 02 13:23:20 crc kubenswrapper[4721]: I0202 13:23:20.904456 4721 scope.go:117] "RemoveContainer" containerID="cf43302a7f3026dcc314dce1f4dfee24a1b2e493151bf89876aef92fa2f944d8" Feb 02 13:23:21 crc kubenswrapper[4721]: I0202 13:23:21.153373 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-kr74s"] Feb 02 13:23:21 crc kubenswrapper[4721]: I0202 13:23:21.177368 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-kr74s"] Feb 02 13:23:21 crc kubenswrapper[4721]: I0202 13:23:21.213552 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-2tnbk"] Feb 02 13:23:21 crc kubenswrapper[4721]: I0202 13:23:21.313515 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-86z2v"] Feb 02 13:23:21 crc kubenswrapper[4721]: I0202 13:23:21.335171 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7wjxh"] Feb 02 13:23:21 crc kubenswrapper[4721]: W0202 13:23:21.380334 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad3578ef_5d1b_4c52_939c_237feadc1c5c.slice/crio-85181cad5536c1551bbbb525ba3a357b7e769080b51a0a4639fbdd8c37e6d7bd WatchSource:0}: Error finding container 85181cad5536c1551bbbb525ba3a357b7e769080b51a0a4639fbdd8c37e6d7bd: Status 404 returned error can't find the container with id 85181cad5536c1551bbbb525ba3a357b7e769080b51a0a4639fbdd8c37e6d7bd Feb 02 13:23:21 crc kubenswrapper[4721]: I0202 13:23:21.618060 4721 generic.go:334] "Generic (PLEG): container finished" podID="71ef45b1-9ff2-40ca-950a-07746f51eca9" containerID="4fe33e5a77c87b36e6e67a1b771c77b2a949e44a54e8c2f88295e47c2b68d215" exitCode=0 Feb 02 13:23:21 crc kubenswrapper[4721]: I0202 13:23:21.618422 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h675n" event={"ID":"71ef45b1-9ff2-40ca-950a-07746f51eca9","Type":"ContainerDied","Data":"4fe33e5a77c87b36e6e67a1b771c77b2a949e44a54e8c2f88295e47c2b68d215"} Feb 02 13:23:21 crc kubenswrapper[4721]: I0202 13:23:21.662625 4721 generic.go:334] "Generic (PLEG): container finished" podID="42cb85a8-831b-4a92-936b-d79276a2d1e5" containerID="23e7e8b91a2ef6d2365bf404f515402fb59b28d83b6dfc69437b9e9833746052" exitCode=0 Feb 02 13:23:21 crc kubenswrapper[4721]: I0202 13:23:21.662738 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-htxgd" 
event={"ID":"42cb85a8-831b-4a92-936b-d79276a2d1e5","Type":"ContainerDied","Data":"23e7e8b91a2ef6d2365bf404f515402fb59b28d83b6dfc69437b9e9833746052"} Feb 02 13:23:21 crc kubenswrapper[4721]: I0202 13:23:21.662772 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-htxgd" event={"ID":"42cb85a8-831b-4a92-936b-d79276a2d1e5","Type":"ContainerStarted","Data":"02de98de50d044af23bd1e24a46d0eb4f77865743e664633928ca4a36e2c50f9"} Feb 02 13:23:21 crc kubenswrapper[4721]: I0202 13:23:21.671867 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2tnbk" event={"ID":"026bbe7a-aec9-40ee-9be3-cdb35054e076","Type":"ContainerStarted","Data":"3095e5ccc72506309a2adf6e3ceae18778d2c5f94dff5364525934f800f177aa"} Feb 02 13:23:21 crc kubenswrapper[4721]: I0202 13:23:21.673403 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-n52pp"] Feb 02 13:23:21 crc kubenswrapper[4721]: I0202 13:23:21.675042 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l7z2q" event={"ID":"af567124-fd0c-420e-b79b-41e8a7140cef","Type":"ContainerStarted","Data":"59e47087e25d7a69cc9b0e24b51c0193c1d130de3a6fbb82bf929574bc9e38b6"} Feb 02 13:23:21 crc kubenswrapper[4721]: I0202 13:23:21.695948 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-86z2v" event={"ID":"bdd67c16-7130-4095-952f-006aa5bcd5bb","Type":"ContainerStarted","Data":"42fcaa03785c97d077f0697f3384b2a0825c5d2c67e774961ea3ccb33c705ea9"} Feb 02 13:23:21 crc kubenswrapper[4721]: I0202 13:23:21.698875 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7wjxh" event={"ID":"ad3578ef-5d1b-4c52-939c-237feadc1c5c","Type":"ContainerStarted","Data":"85181cad5536c1551bbbb525ba3a357b7e769080b51a0a4639fbdd8c37e6d7bd"} Feb 02 13:23:21 crc kubenswrapper[4721]: I0202 13:23:21.703103 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:23:21 crc kubenswrapper[4721]: W0202 13:23:21.732514 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47a4176b_5f58_47a9_a614_e5d05526da18.slice/crio-0e87512b76ceddbcd36259192a7d06b2c50b8fc83a99c2a6ecdeee04f9de5d79 WatchSource:0}: Error finding container 0e87512b76ceddbcd36259192a7d06b2c50b8fc83a99c2a6ecdeee04f9de5d79: Status 404 returned error can't find the container with id 0e87512b76ceddbcd36259192a7d06b2c50b8fc83a99c2a6ecdeee04f9de5d79 Feb 02 13:23:21 crc kubenswrapper[4721]: W0202 13:23:21.756921 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42af4b6d_a3ac_4a90_8338_71dcdba65713.slice/crio-456048e529dbcb77018460d463790172fa02060ea250e5dff1aa993f12b0810c WatchSource:0}: Error finding container 456048e529dbcb77018460d463790172fa02060ea250e5dff1aa993f12b0810c: Status 404 returned error can't find the container with id 456048e529dbcb77018460d463790172fa02060ea250e5dff1aa993f12b0810c Feb 02 13:23:21 crc kubenswrapper[4721]: I0202 13:23:21.781865 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-cgqfl"] Feb 02 13:23:21 crc kubenswrapper[4721]: I0202 13:23:21.806618 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-f8lxs"] Feb 02 13:23:21 crc kubenswrapper[4721]: I0202 13:23:21.812098 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-bootstrap-l7z2q" podStartSLOduration=3.812050175 podStartE2EDuration="3.812050175s" podCreationTimestamp="2026-02-02 13:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:23:21.721541539 +0000 UTC m=+1342.024055938" watchObservedRunningTime="2026-02-02 13:23:21.812050175 +0000 UTC m=+1342.114564564" Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.175987 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-htxgd" Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.235409 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-dns-swift-storage-0\") pod \"42cb85a8-831b-4a92-936b-d79276a2d1e5\" (UID: \"42cb85a8-831b-4a92-936b-d79276a2d1e5\") " Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.235462 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-ovsdbserver-nb\") pod \"42cb85a8-831b-4a92-936b-d79276a2d1e5\" (UID: \"42cb85a8-831b-4a92-936b-d79276a2d1e5\") " Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.235486 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-ovsdbserver-sb\") pod \"42cb85a8-831b-4a92-936b-d79276a2d1e5\" (UID: \"42cb85a8-831b-4a92-936b-d79276a2d1e5\") " Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.235597 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2trvt\" (UniqueName: \"kubernetes.io/projected/42cb85a8-831b-4a92-936b-d79276a2d1e5-kube-api-access-2trvt\") pod \"42cb85a8-831b-4a92-936b-d79276a2d1e5\" (UID: \"42cb85a8-831b-4a92-936b-d79276a2d1e5\") " Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.235622 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-config\") pod \"42cb85a8-831b-4a92-936b-d79276a2d1e5\" (UID: \"42cb85a8-831b-4a92-936b-d79276a2d1e5\") " Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.235693 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-dns-svc\") pod \"42cb85a8-831b-4a92-936b-d79276a2d1e5\" (UID: \"42cb85a8-831b-4a92-936b-d79276a2d1e5\") " Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.247237 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42cb85a8-831b-4a92-936b-d79276a2d1e5-kube-api-access-2trvt" (OuterVolumeSpecName: "kube-api-access-2trvt") pod "42cb85a8-831b-4a92-936b-d79276a2d1e5" (UID: "42cb85a8-831b-4a92-936b-d79276a2d1e5"). InnerVolumeSpecName "kube-api-access-2trvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.281667 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "42cb85a8-831b-4a92-936b-d79276a2d1e5" (UID: "42cb85a8-831b-4a92-936b-d79276a2d1e5"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.313648 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-config" (OuterVolumeSpecName: "config") pod "42cb85a8-831b-4a92-936b-d79276a2d1e5" (UID: "42cb85a8-831b-4a92-936b-d79276a2d1e5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.315663 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "42cb85a8-831b-4a92-936b-d79276a2d1e5" (UID: "42cb85a8-831b-4a92-936b-d79276a2d1e5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.317839 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "42cb85a8-831b-4a92-936b-d79276a2d1e5" (UID: "42cb85a8-831b-4a92-936b-d79276a2d1e5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.324803 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "42cb85a8-831b-4a92-936b-d79276a2d1e5" (UID: "42cb85a8-831b-4a92-936b-d79276a2d1e5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.343575 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2trvt\" (UniqueName: \"kubernetes.io/projected/42cb85a8-831b-4a92-936b-d79276a2d1e5-kube-api-access-2trvt\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.343609 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.343619 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.343631 4721 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.343642 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.343650 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42cb85a8-831b-4a92-936b-d79276a2d1e5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.436618 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f27ccd0-68e0-47da-a813-83684a0b1787" path="/var/lib/kubelet/pods/3f27ccd0-68e0-47da-a813-83684a0b1787/volumes" Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.438019 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.717454 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-htxgd" event={"ID":"42cb85a8-831b-4a92-936b-d79276a2d1e5","Type":"ContainerDied","Data":"02de98de50d044af23bd1e24a46d0eb4f77865743e664633928ca4a36e2c50f9"} Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.717772 4721 scope.go:117] "RemoveContainer" containerID="23e7e8b91a2ef6d2365bf404f515402fb59b28d83b6dfc69437b9e9833746052" Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.717612 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-htxgd" Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.722712 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2tnbk" event={"ID":"026bbe7a-aec9-40ee-9be3-cdb35054e076","Type":"ContainerStarted","Data":"759c834b6a1cc62188124483c6831d2bab037f76c9aac624de4118e2066fe35a"} Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.730731 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cgqfl" event={"ID":"47a4176b-5f58-47a9-a614-e5d05526da18","Type":"ContainerStarted","Data":"0e87512b76ceddbcd36259192a7d06b2c50b8fc83a99c2a6ecdeee04f9de5d79"} Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.734219 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-n52pp" event={"ID":"9fa244a8-7588-4d87-bd5b-cbcd10780c83","Type":"ContainerStarted","Data":"f8c8f50f7ba68fd272f974a962b3d958bc797bb63b6619f9ead2e2ffc4525a32"} Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.737963 4721 generic.go:334] "Generic (PLEG): container finished" podID="42af4b6d-a3ac-4a90-8338-71dcdba65713" containerID="5e753861212a75a6fc0b9472a29c53a493e744b8cbd4bb8e6b1dda52762f0e28" exitCode=0 Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.738035 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" event={"ID":"42af4b6d-a3ac-4a90-8338-71dcdba65713","Type":"ContainerDied","Data":"5e753861212a75a6fc0b9472a29c53a493e744b8cbd4bb8e6b1dda52762f0e28"} Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.738145 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" event={"ID":"42af4b6d-a3ac-4a90-8338-71dcdba65713","Type":"ContainerStarted","Data":"456048e529dbcb77018460d463790172fa02060ea250e5dff1aa993f12b0810c"} Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.744734 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-2tnbk" podStartSLOduration=4.74471589 podStartE2EDuration="4.74471589s" podCreationTimestamp="2026-02-02 13:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:23:22.744007472 +0000 UTC m=+1343.046521861" watchObservedRunningTime="2026-02-02 13:23:22.74471589 +0000 UTC m=+1343.047230279" Feb 02 13:23:22 crc kubenswrapper[4721]: I0202 13:23:22.753419 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5","Type":"ContainerStarted","Data":"5372b8d7305b4393c88e00abc4c50b4b02eb1dd6564a279f35710dd3d18e6691"} Feb 02 13:23:23 crc kubenswrapper[4721]: I0202 13:23:23.039146 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-htxgd"] Feb 02 13:23:23 crc kubenswrapper[4721]: I0202 13:23:23.064616 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-htxgd"] Feb 02 13:23:23 crc kubenswrapper[4721]: I0202 13:23:23.774900 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-h675n" Feb 02 13:23:23 crc kubenswrapper[4721]: I0202 13:23:23.779240 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h675n" event={"ID":"71ef45b1-9ff2-40ca-950a-07746f51eca9","Type":"ContainerDied","Data":"3cbb26dd329c92faeeafbf64445c9a0bfc1db825e69815225692a1b41aaa4b51"} Feb 02 13:23:23 crc kubenswrapper[4721]: I0202 13:23:23.779298 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cbb26dd329c92faeeafbf64445c9a0bfc1db825e69815225692a1b41aaa4b51" Feb 02 13:23:23 crc kubenswrapper[4721]: I0202 13:23:23.802415 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" event={"ID":"42af4b6d-a3ac-4a90-8338-71dcdba65713","Type":"ContainerStarted","Data":"bc03b54bbb02a10feb6c590756e051033c53cca1bb5f2262c3ad8498b1f79580"} Feb 02 13:23:23 crc kubenswrapper[4721]: I0202 13:23:23.802985 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" Feb 02 13:23:23 crc kubenswrapper[4721]: I0202 13:23:23.831143 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" podStartSLOduration=4.831126073 podStartE2EDuration="4.831126073s" podCreationTimestamp="2026-02-02 13:23:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:23:23.818229814 +0000 UTC m=+1344.120744213" watchObservedRunningTime="2026-02-02 13:23:23.831126073 +0000 UTC m=+1344.133640462" Feb 02 13:23:23 crc kubenswrapper[4721]: I0202 13:23:23.919370 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71ef45b1-9ff2-40ca-950a-07746f51eca9-operator-scripts\") pod \"71ef45b1-9ff2-40ca-950a-07746f51eca9\" (UID: \"71ef45b1-9ff2-40ca-950a-07746f51eca9\") " Feb 02 13:23:23 crc kubenswrapper[4721]: I0202 13:23:23.919736 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvhpv\" (UniqueName: \"kubernetes.io/projected/71ef45b1-9ff2-40ca-950a-07746f51eca9-kube-api-access-rvhpv\") pod \"71ef45b1-9ff2-40ca-950a-07746f51eca9\" (UID: \"71ef45b1-9ff2-40ca-950a-07746f51eca9\") " Feb 02 13:23:23 crc kubenswrapper[4721]: I0202 13:23:23.920930 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71ef45b1-9ff2-40ca-950a-07746f51eca9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "71ef45b1-9ff2-40ca-950a-07746f51eca9" (UID: "71ef45b1-9ff2-40ca-950a-07746f51eca9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:23 crc kubenswrapper[4721]: I0202 13:23:23.924045 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71ef45b1-9ff2-40ca-950a-07746f51eca9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:23 crc kubenswrapper[4721]: I0202 13:23:23.951331 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71ef45b1-9ff2-40ca-950a-07746f51eca9-kube-api-access-rvhpv" (OuterVolumeSpecName: "kube-api-access-rvhpv") pod "71ef45b1-9ff2-40ca-950a-07746f51eca9" (UID: "71ef45b1-9ff2-40ca-950a-07746f51eca9"). InnerVolumeSpecName "kube-api-access-rvhpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:23:24 crc kubenswrapper[4721]: I0202 13:23:24.025665 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvhpv\" (UniqueName: \"kubernetes.io/projected/71ef45b1-9ff2-40ca-950a-07746f51eca9-kube-api-access-rvhpv\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:24 crc kubenswrapper[4721]: I0202 13:23:24.433235 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42cb85a8-831b-4a92-936b-d79276a2d1e5" path="/var/lib/kubelet/pods/42cb85a8-831b-4a92-936b-d79276a2d1e5/volumes" Feb 02 13:23:24 crc kubenswrapper[4721]: I0202 13:23:24.705910 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 02 13:23:24 crc kubenswrapper[4721]: I0202 13:23:24.713665 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 02 13:23:24 crc kubenswrapper[4721]: I0202 13:23:24.844316 4721 generic.go:334] "Generic (PLEG): container finished" podID="0531b398-2d44-42c2-bd6c-9e9f7ab8c85d" containerID="08a73cbae26287f30a607e9d3bb9b367097d0316dac19c52bb303a922febd87c" exitCode=0 Feb 02 13:23:24 crc kubenswrapper[4721]: I0202 13:23:24.844500 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-h675n" Feb 02 13:23:24 crc kubenswrapper[4721]: I0202 13:23:24.845052 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hrqtc" event={"ID":"0531b398-2d44-42c2-bd6c-9e9f7ab8c85d","Type":"ContainerDied","Data":"08a73cbae26287f30a607e9d3bb9b367097d0316dac19c52bb303a922febd87c"} Feb 02 13:23:24 crc kubenswrapper[4721]: I0202 13:23:24.861431 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 02 13:23:27 crc kubenswrapper[4721]: I0202 13:23:27.890194 4721 generic.go:334] "Generic (PLEG): container finished" podID="af567124-fd0c-420e-b79b-41e8a7140cef" containerID="59e47087e25d7a69cc9b0e24b51c0193c1d130de3a6fbb82bf929574bc9e38b6" exitCode=0 Feb 02 13:23:27 crc kubenswrapper[4721]: I0202 13:23:27.890305 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l7z2q" event={"ID":"af567124-fd0c-420e-b79b-41e8a7140cef","Type":"ContainerDied","Data":"59e47087e25d7a69cc9b0e24b51c0193c1d130de3a6fbb82bf929574bc9e38b6"} Feb 02 13:23:27 crc kubenswrapper[4721]: I0202 13:23:27.894497 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hrqtc" event={"ID":"0531b398-2d44-42c2-bd6c-9e9f7ab8c85d","Type":"ContainerDied","Data":"bcc0541c75c4c63b75ba7d9003f7cf6d54e2e725d204aed88938ca8246ddf26a"} Feb 02 13:23:27 crc kubenswrapper[4721]: I0202 13:23:27.894550 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcc0541c75c4c63b75ba7d9003f7cf6d54e2e725d204aed88938ca8246ddf26a" Feb 02 13:23:27 crc kubenswrapper[4721]: I0202 13:23:27.908462 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-hrqtc" Feb 02 13:23:28 crc kubenswrapper[4721]: I0202 13:23:28.025320 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0531b398-2d44-42c2-bd6c-9e9f7ab8c85d-combined-ca-bundle\") pod \"0531b398-2d44-42c2-bd6c-9e9f7ab8c85d\" (UID: \"0531b398-2d44-42c2-bd6c-9e9f7ab8c85d\") " Feb 02 13:23:28 crc kubenswrapper[4721]: I0202 13:23:28.025401 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0531b398-2d44-42c2-bd6c-9e9f7ab8c85d-config-data\") pod \"0531b398-2d44-42c2-bd6c-9e9f7ab8c85d\" (UID: \"0531b398-2d44-42c2-bd6c-9e9f7ab8c85d\") " Feb 02 13:23:28 crc kubenswrapper[4721]: I0202 13:23:28.025508 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0531b398-2d44-42c2-bd6c-9e9f7ab8c85d-db-sync-config-data\") pod \"0531b398-2d44-42c2-bd6c-9e9f7ab8c85d\" (UID: \"0531b398-2d44-42c2-bd6c-9e9f7ab8c85d\") " Feb 02 13:23:28 crc kubenswrapper[4721]: I0202 13:23:28.025584 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5rlj\" (UniqueName: \"kubernetes.io/projected/0531b398-2d44-42c2-bd6c-9e9f7ab8c85d-kube-api-access-s5rlj\") pod \"0531b398-2d44-42c2-bd6c-9e9f7ab8c85d\" (UID: \"0531b398-2d44-42c2-bd6c-9e9f7ab8c85d\") " Feb 02 13:23:28 crc kubenswrapper[4721]: I0202 13:23:28.030798 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0531b398-2d44-42c2-bd6c-9e9f7ab8c85d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0531b398-2d44-42c2-bd6c-9e9f7ab8c85d" (UID: "0531b398-2d44-42c2-bd6c-9e9f7ab8c85d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:23:28 crc kubenswrapper[4721]: I0202 13:23:28.031409 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0531b398-2d44-42c2-bd6c-9e9f7ab8c85d-kube-api-access-s5rlj" (OuterVolumeSpecName: "kube-api-access-s5rlj") pod "0531b398-2d44-42c2-bd6c-9e9f7ab8c85d" (UID: "0531b398-2d44-42c2-bd6c-9e9f7ab8c85d"). InnerVolumeSpecName "kube-api-access-s5rlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:23:28 crc kubenswrapper[4721]: I0202 13:23:28.058102 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0531b398-2d44-42c2-bd6c-9e9f7ab8c85d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0531b398-2d44-42c2-bd6c-9e9f7ab8c85d" (UID: "0531b398-2d44-42c2-bd6c-9e9f7ab8c85d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:23:28 crc kubenswrapper[4721]: I0202 13:23:28.087649 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0531b398-2d44-42c2-bd6c-9e9f7ab8c85d-config-data" (OuterVolumeSpecName: "config-data") pod "0531b398-2d44-42c2-bd6c-9e9f7ab8c85d" (UID: "0531b398-2d44-42c2-bd6c-9e9f7ab8c85d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:23:28 crc kubenswrapper[4721]: I0202 13:23:28.129187 4721 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0531b398-2d44-42c2-bd6c-9e9f7ab8c85d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:28 crc kubenswrapper[4721]: I0202 13:23:28.129233 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5rlj\" (UniqueName: \"kubernetes.io/projected/0531b398-2d44-42c2-bd6c-9e9f7ab8c85d-kube-api-access-s5rlj\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:28 crc kubenswrapper[4721]: I0202 13:23:28.129248 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0531b398-2d44-42c2-bd6c-9e9f7ab8c85d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:28 crc kubenswrapper[4721]: I0202 13:23:28.129260 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0531b398-2d44-42c2-bd6c-9e9f7ab8c85d-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:28 crc kubenswrapper[4721]: I0202 13:23:28.909630 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-hrqtc" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.403738 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-f8lxs"] Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.403960 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" podUID="42af4b6d-a3ac-4a90-8338-71dcdba65713" containerName="dnsmasq-dns" containerID="cri-o://bc03b54bbb02a10feb6c590756e051033c53cca1bb5f2262c3ad8498b1f79580" gracePeriod=10 Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.406262 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.506620 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-f9ml5"] Feb 02 13:23:29 crc kubenswrapper[4721]: E0202 13:23:29.507244 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0531b398-2d44-42c2-bd6c-9e9f7ab8c85d" containerName="glance-db-sync" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.507265 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="0531b398-2d44-42c2-bd6c-9e9f7ab8c85d" containerName="glance-db-sync" Feb 02 13:23:29 crc kubenswrapper[4721]: E0202 13:23:29.507295 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42cb85a8-831b-4a92-936b-d79276a2d1e5" containerName="init" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.507303 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="42cb85a8-831b-4a92-936b-d79276a2d1e5" containerName="init" Feb 02 13:23:29 crc kubenswrapper[4721]: E0202 13:23:29.507320 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f27ccd0-68e0-47da-a813-83684a0b1787" containerName="init" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.507328 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f27ccd0-68e0-47da-a813-83684a0b1787" containerName="init" Feb 02 13:23:29 crc kubenswrapper[4721]: E0202 13:23:29.507344 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f27ccd0-68e0-47da-a813-83684a0b1787" containerName="dnsmasq-dns" Feb 02 13:23:29 
crc kubenswrapper[4721]: I0202 13:23:29.507351 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f27ccd0-68e0-47da-a813-83684a0b1787" containerName="dnsmasq-dns" Feb 02 13:23:29 crc kubenswrapper[4721]: E0202 13:23:29.507366 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71ef45b1-9ff2-40ca-950a-07746f51eca9" containerName="mariadb-account-create-update" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.507374 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="71ef45b1-9ff2-40ca-950a-07746f51eca9" containerName="mariadb-account-create-update" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.507619 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="42cb85a8-831b-4a92-936b-d79276a2d1e5" containerName="init" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.507647 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="71ef45b1-9ff2-40ca-950a-07746f51eca9" containerName="mariadb-account-create-update" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.507675 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f27ccd0-68e0-47da-a813-83684a0b1787" containerName="dnsmasq-dns" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.507689 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="0531b398-2d44-42c2-bd6c-9e9f7ab8c85d" containerName="glance-db-sync" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.509026 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-f9ml5" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.530732 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-f9ml5"] Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.585600 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-f9ml5\" (UID: \"a879c878-fff2-4aa4-b08a-67d13027b95e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-f9ml5" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.585725 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-f9ml5\" (UID: \"a879c878-fff2-4aa4-b08a-67d13027b95e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-f9ml5" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.585753 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-f9ml5\" (UID: \"a879c878-fff2-4aa4-b08a-67d13027b95e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-f9ml5" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.585882 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-config\") pod \"dnsmasq-dns-785d8bcb8c-f9ml5\" (UID: \"a879c878-fff2-4aa4-b08a-67d13027b95e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-f9ml5" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.586082 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-f9ml5\" (UID: \"a879c878-fff2-4aa4-b08a-67d13027b95e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-f9ml5" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.586198 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sws9d\" (UniqueName: \"kubernetes.io/projected/a879c878-fff2-4aa4-b08a-67d13027b95e-kube-api-access-sws9d\") pod \"dnsmasq-dns-785d8bcb8c-f9ml5\" (UID: \"a879c878-fff2-4aa4-b08a-67d13027b95e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-f9ml5" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.631497 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" podUID="42af4b6d-a3ac-4a90-8338-71dcdba65713" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.187:5353: connect: connection refused" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.688214 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-f9ml5\" (UID: \"a879c878-fff2-4aa4-b08a-67d13027b95e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-f9ml5" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.688266 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-f9ml5\" (UID: \"a879c878-fff2-4aa4-b08a-67d13027b95e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-f9ml5" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.688359 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-config\") pod \"dnsmasq-dns-785d8bcb8c-f9ml5\" (UID: \"a879c878-fff2-4aa4-b08a-67d13027b95e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-f9ml5" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.688447 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-f9ml5\" (UID: \"a879c878-fff2-4aa4-b08a-67d13027b95e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-f9ml5" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.688525 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sws9d\" (UniqueName: \"kubernetes.io/projected/a879c878-fff2-4aa4-b08a-67d13027b95e-kube-api-access-sws9d\") pod \"dnsmasq-dns-785d8bcb8c-f9ml5\" (UID: \"a879c878-fff2-4aa4-b08a-67d13027b95e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-f9ml5" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.688620 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-f9ml5\" (UID: \"a879c878-fff2-4aa4-b08a-67d13027b95e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-f9ml5" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.689118 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-dns-svc\") pod 
\"dnsmasq-dns-785d8bcb8c-f9ml5\" (UID: \"a879c878-fff2-4aa4-b08a-67d13027b95e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-f9ml5" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.689418 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-f9ml5\" (UID: \"a879c878-fff2-4aa4-b08a-67d13027b95e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-f9ml5" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.689740 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-f9ml5\" (UID: \"a879c878-fff2-4aa4-b08a-67d13027b95e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-f9ml5" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.689853 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-config\") pod \"dnsmasq-dns-785d8bcb8c-f9ml5\" (UID: \"a879c878-fff2-4aa4-b08a-67d13027b95e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-f9ml5" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.690953 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-f9ml5\" (UID: \"a879c878-fff2-4aa4-b08a-67d13027b95e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-f9ml5" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.747026 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sws9d\" (UniqueName: \"kubernetes.io/projected/a879c878-fff2-4aa4-b08a-67d13027b95e-kube-api-access-sws9d\") pod \"dnsmasq-dns-785d8bcb8c-f9ml5\" (UID: \"a879c878-fff2-4aa4-b08a-67d13027b95e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-f9ml5" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.868863 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-f9ml5" Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.929266 4721 generic.go:334] "Generic (PLEG): container finished" podID="42af4b6d-a3ac-4a90-8338-71dcdba65713" containerID="bc03b54bbb02a10feb6c590756e051033c53cca1bb5f2262c3ad8498b1f79580" exitCode=0 Feb 02 13:23:29 crc kubenswrapper[4721]: I0202 13:23:29.929307 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" event={"ID":"42af4b6d-a3ac-4a90-8338-71dcdba65713","Type":"ContainerDied","Data":"bc03b54bbb02a10feb6c590756e051033c53cca1bb5f2262c3ad8498b1f79580"} Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.242136 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.244737 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.247116 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.247214 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.247367 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-f7kg2" Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.258409 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.302580 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/231734f6-4050-4aff-92a2-a92982428b95-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"231734f6-4050-4aff-92a2-a92982428b95\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.302731 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/231734f6-4050-4aff-92a2-a92982428b95-logs\") pod \"glance-default-external-api-0\" (UID: \"231734f6-4050-4aff-92a2-a92982428b95\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.302765 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/231734f6-4050-4aff-92a2-a92982428b95-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"231734f6-4050-4aff-92a2-a92982428b95\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.302826 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw8rf\" (UniqueName: \"kubernetes.io/projected/231734f6-4050-4aff-92a2-a92982428b95-kube-api-access-tw8rf\") pod \"glance-default-external-api-0\" (UID: \"231734f6-4050-4aff-92a2-a92982428b95\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.302855 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\") pod \"glance-default-external-api-0\" (UID: \"231734f6-4050-4aff-92a2-a92982428b95\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.302896 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/231734f6-4050-4aff-92a2-a92982428b95-scripts\") pod \"glance-default-external-api-0\" (UID: \"231734f6-4050-4aff-92a2-a92982428b95\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.302918 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/231734f6-4050-4aff-92a2-a92982428b95-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"231734f6-4050-4aff-92a2-a92982428b95\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.405289 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw8rf\" (UniqueName: \"kubernetes.io/projected/231734f6-4050-4aff-92a2-a92982428b95-kube-api-access-tw8rf\") pod \"glance-default-external-api-0\" (UID: \"231734f6-4050-4aff-92a2-a92982428b95\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.405336 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\") pod \"glance-default-external-api-0\" (UID: \"231734f6-4050-4aff-92a2-a92982428b95\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.405385 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/231734f6-4050-4aff-92a2-a92982428b95-scripts\") pod \"glance-default-external-api-0\" (UID: \"231734f6-4050-4aff-92a2-a92982428b95\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.405412 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/231734f6-4050-4aff-92a2-a92982428b95-config-data\") pod \"glance-default-external-api-0\" (UID: \"231734f6-4050-4aff-92a2-a92982428b95\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.405447 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/231734f6-4050-4aff-92a2-a92982428b95-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"231734f6-4050-4aff-92a2-a92982428b95\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.405559 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/231734f6-4050-4aff-92a2-a92982428b95-logs\") pod \"glance-default-external-api-0\" (UID: \"231734f6-4050-4aff-92a2-a92982428b95\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.405592 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/231734f6-4050-4aff-92a2-a92982428b95-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"231734f6-4050-4aff-92a2-a92982428b95\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.406919 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/231734f6-4050-4aff-92a2-a92982428b95-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"231734f6-4050-4aff-92a2-a92982428b95\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.408682 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/231734f6-4050-4aff-92a2-a92982428b95-logs\") pod \"glance-default-external-api-0\" (UID: \"231734f6-4050-4aff-92a2-a92982428b95\") " 
pod="openstack/glance-default-external-api-0" Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.410906 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/231734f6-4050-4aff-92a2-a92982428b95-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"231734f6-4050-4aff-92a2-a92982428b95\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.411371 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/231734f6-4050-4aff-92a2-a92982428b95-config-data\") pod \"glance-default-external-api-0\" (UID: \"231734f6-4050-4aff-92a2-a92982428b95\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.411667 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.411735 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\") pod \"glance-default-external-api-0\" (UID: \"231734f6-4050-4aff-92a2-a92982428b95\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c7812605d9919b226d4340fce797cd8fb18c9c948d1e68864aa7eb7aeecf4816/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.412470 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/231734f6-4050-4aff-92a2-a92982428b95-scripts\") pod \"glance-default-external-api-0\" (UID: \"231734f6-4050-4aff-92a2-a92982428b95\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.434291 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw8rf\" (UniqueName: \"kubernetes.io/projected/231734f6-4050-4aff-92a2-a92982428b95-kube-api-access-tw8rf\") pod \"glance-default-external-api-0\" (UID: \"231734f6-4050-4aff-92a2-a92982428b95\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.477975 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\") pod \"glance-default-external-api-0\" (UID: \"231734f6-4050-4aff-92a2-a92982428b95\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.665408 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.912667 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.914386 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.916740 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 13:23:30 crc kubenswrapper[4721]: I0202 13:23:30.939314 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 13:23:31 crc kubenswrapper[4721]: I0202 13:23:31.021472 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2651c902-94b1-4da3-b03f-cd5aee83749b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2651c902-94b1-4da3-b03f-cd5aee83749b\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:23:31 crc kubenswrapper[4721]: I0202 13:23:31.021904 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2651c902-94b1-4da3-b03f-cd5aee83749b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2651c902-94b1-4da3-b03f-cd5aee83749b\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:23:31 crc kubenswrapper[4721]: I0202 13:23:31.021997 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\") pod \"glance-default-internal-api-0\" (UID: \"2651c902-94b1-4da3-b03f-cd5aee83749b\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:23:31 crc kubenswrapper[4721]: I0202 13:23:31.022132 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2651c902-94b1-4da3-b03f-cd5aee83749b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2651c902-94b1-4da3-b03f-cd5aee83749b\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:23:31 crc kubenswrapper[4721]: I0202 13:23:31.022171 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2651c902-94b1-4da3-b03f-cd5aee83749b-logs\") pod \"glance-default-internal-api-0\" (UID: \"2651c902-94b1-4da3-b03f-cd5aee83749b\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:23:31 crc kubenswrapper[4721]: I0202 13:23:31.022193 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2651c902-94b1-4da3-b03f-cd5aee83749b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2651c902-94b1-4da3-b03f-cd5aee83749b\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:23:31 crc kubenswrapper[4721]: I0202 13:23:31.022235 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgt6x\" (UniqueName: \"kubernetes.io/projected/2651c902-94b1-4da3-b03f-cd5aee83749b-kube-api-access-jgt6x\") pod \"glance-default-internal-api-0\" (UID: \"2651c902-94b1-4da3-b03f-cd5aee83749b\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:23:31 crc kubenswrapper[4721]: I0202 13:23:31.124822 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2651c902-94b1-4da3-b03f-cd5aee83749b-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"2651c902-94b1-4da3-b03f-cd5aee83749b\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:23:31 crc kubenswrapper[4721]: I0202 13:23:31.124870 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2651c902-94b1-4da3-b03f-cd5aee83749b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2651c902-94b1-4da3-b03f-cd5aee83749b\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:23:31 crc kubenswrapper[4721]: I0202 13:23:31.124956 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\") pod \"glance-default-internal-api-0\" (UID: \"2651c902-94b1-4da3-b03f-cd5aee83749b\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:23:31 crc kubenswrapper[4721]: I0202 13:23:31.125060 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2651c902-94b1-4da3-b03f-cd5aee83749b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2651c902-94b1-4da3-b03f-cd5aee83749b\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:23:31 crc kubenswrapper[4721]: I0202 13:23:31.125100 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2651c902-94b1-4da3-b03f-cd5aee83749b-logs\") pod \"glance-default-internal-api-0\" (UID: \"2651c902-94b1-4da3-b03f-cd5aee83749b\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:23:31 crc kubenswrapper[4721]: I0202 13:23:31.125115 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2651c902-94b1-4da3-b03f-cd5aee83749b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2651c902-94b1-4da3-b03f-cd5aee83749b\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:23:31 crc kubenswrapper[4721]: I0202 13:23:31.125149 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgt6x\" (UniqueName: \"kubernetes.io/projected/2651c902-94b1-4da3-b03f-cd5aee83749b-kube-api-access-jgt6x\") pod \"glance-default-internal-api-0\" (UID: \"2651c902-94b1-4da3-b03f-cd5aee83749b\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:23:31 crc kubenswrapper[4721]: I0202 13:23:31.125702 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2651c902-94b1-4da3-b03f-cd5aee83749b-logs\") pod \"glance-default-internal-api-0\" (UID: \"2651c902-94b1-4da3-b03f-cd5aee83749b\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:23:31 crc kubenswrapper[4721]: I0202 13:23:31.127162 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2651c902-94b1-4da3-b03f-cd5aee83749b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2651c902-94b1-4da3-b03f-cd5aee83749b\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:23:31 crc kubenswrapper[4721]: I0202 13:23:31.132243 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 02 13:23:31 crc kubenswrapper[4721]: I0202 13:23:31.132286 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\") pod \"glance-default-internal-api-0\" (UID: \"2651c902-94b1-4da3-b03f-cd5aee83749b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9f813ebfbde533117d7c5539927c335132efd30cff3e1cb355d78cb9d4c1a927/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 02 13:23:31 crc kubenswrapper[4721]: I0202 13:23:31.136316 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2651c902-94b1-4da3-b03f-cd5aee83749b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2651c902-94b1-4da3-b03f-cd5aee83749b\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:23:31 crc kubenswrapper[4721]: I0202 13:23:31.138612 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2651c902-94b1-4da3-b03f-cd5aee83749b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2651c902-94b1-4da3-b03f-cd5aee83749b\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:23:31 crc kubenswrapper[4721]: I0202 13:23:31.145505 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgt6x\" (UniqueName: \"kubernetes.io/projected/2651c902-94b1-4da3-b03f-cd5aee83749b-kube-api-access-jgt6x\") pod \"glance-default-internal-api-0\" (UID: \"2651c902-94b1-4da3-b03f-cd5aee83749b\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:23:31 crc kubenswrapper[4721]: I0202 13:23:31.146105 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2651c902-94b1-4da3-b03f-cd5aee83749b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2651c902-94b1-4da3-b03f-cd5aee83749b\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:23:31 crc kubenswrapper[4721]: I0202 13:23:31.181662 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\") pod \"glance-default-internal-api-0\" (UID: \"2651c902-94b1-4da3-b03f-cd5aee83749b\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:23:31 crc kubenswrapper[4721]: I0202 13:23:31.242821 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 13:23:32 crc kubenswrapper[4721]: I0202 13:23:32.240868 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 13:23:32 crc kubenswrapper[4721]: I0202 13:23:32.331302 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 13:23:34 crc kubenswrapper[4721]: I0202 13:23:34.631884 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" podUID="42af4b6d-a3ac-4a90-8338-71dcdba65713" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.187:5353: connect: connection refused" Feb 02 13:23:39 crc kubenswrapper[4721]: I0202 13:23:39.634952 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" podUID="42af4b6d-a3ac-4a90-8338-71dcdba65713" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.187:5353: connect: connection refused" Feb 02 13:23:39 crc kubenswrapper[4721]: I0202 13:23:39.635471 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" Feb 02 13:23:43 crc kubenswrapper[4721]: I0202 13:23:43.129549 4721 generic.go:334] "Generic (PLEG): container finished" podID="026bbe7a-aec9-40ee-9be3-cdb35054e076" containerID="759c834b6a1cc62188124483c6831d2bab037f76c9aac624de4118e2066fe35a" exitCode=0 Feb 02 13:23:43 crc kubenswrapper[4721]: I0202 13:23:43.129650 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2tnbk" event={"ID":"026bbe7a-aec9-40ee-9be3-cdb35054e076","Type":"ContainerDied","Data":"759c834b6a1cc62188124483c6831d2bab037f76c9aac624de4118e2066fe35a"} Feb 02 13:23:44 crc kubenswrapper[4721]: I0202 13:23:44.631492 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" podUID="42af4b6d-a3ac-4a90-8338-71dcdba65713" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.187:5353: connect: connection refused" Feb 02 13:23:47 crc kubenswrapper[4721]: I0202 13:23:47.879845 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-l7z2q" Feb 02 13:23:47 crc kubenswrapper[4721]: I0202 13:23:47.955665 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-fernet-keys\") pod \"af567124-fd0c-420e-b79b-41e8a7140cef\" (UID: \"af567124-fd0c-420e-b79b-41e8a7140cef\") " Feb 02 13:23:47 crc kubenswrapper[4721]: I0202 13:23:47.955794 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzqjs\" (UniqueName: \"kubernetes.io/projected/af567124-fd0c-420e-b79b-41e8a7140cef-kube-api-access-hzqjs\") pod \"af567124-fd0c-420e-b79b-41e8a7140cef\" (UID: \"af567124-fd0c-420e-b79b-41e8a7140cef\") " Feb 02 13:23:47 crc kubenswrapper[4721]: I0202 13:23:47.955858 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-scripts\") pod \"af567124-fd0c-420e-b79b-41e8a7140cef\" (UID: \"af567124-fd0c-420e-b79b-41e8a7140cef\") " Feb 02 13:23:47 crc kubenswrapper[4721]: I0202 13:23:47.955940 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-config-data\") pod \"af567124-fd0c-420e-b79b-41e8a7140cef\" (UID: \"af567124-fd0c-420e-b79b-41e8a7140cef\") " Feb 02 13:23:47 crc kubenswrapper[4721]: I0202 13:23:47.956012 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-credential-keys\") pod \"af567124-fd0c-420e-b79b-41e8a7140cef\" (UID: \"af567124-fd0c-420e-b79b-41e8a7140cef\") " Feb 02 13:23:47 crc kubenswrapper[4721]: I0202 13:23:47.956212 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-combined-ca-bundle\") pod \"af567124-fd0c-420e-b79b-41e8a7140cef\" (UID: \"af567124-fd0c-420e-b79b-41e8a7140cef\") " Feb 02 13:23:47 crc kubenswrapper[4721]: I0202 13:23:47.964049 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-scripts" (OuterVolumeSpecName: "scripts") pod "af567124-fd0c-420e-b79b-41e8a7140cef" (UID: "af567124-fd0c-420e-b79b-41e8a7140cef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:23:47 crc kubenswrapper[4721]: I0202 13:23:47.964130 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "af567124-fd0c-420e-b79b-41e8a7140cef" (UID: "af567124-fd0c-420e-b79b-41e8a7140cef"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:23:47 crc kubenswrapper[4721]: I0202 13:23:47.964249 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "af567124-fd0c-420e-b79b-41e8a7140cef" (UID: "af567124-fd0c-420e-b79b-41e8a7140cef"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:23:47 crc kubenswrapper[4721]: I0202 13:23:47.965118 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af567124-fd0c-420e-b79b-41e8a7140cef-kube-api-access-hzqjs" (OuterVolumeSpecName: "kube-api-access-hzqjs") pod "af567124-fd0c-420e-b79b-41e8a7140cef" (UID: "af567124-fd0c-420e-b79b-41e8a7140cef"). InnerVolumeSpecName "kube-api-access-hzqjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:23:47 crc kubenswrapper[4721]: I0202 13:23:47.984823 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af567124-fd0c-420e-b79b-41e8a7140cef" (UID: "af567124-fd0c-420e-b79b-41e8a7140cef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:23:47 crc kubenswrapper[4721]: I0202 13:23:47.985840 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-config-data" (OuterVolumeSpecName: "config-data") pod "af567124-fd0c-420e-b79b-41e8a7140cef" (UID: "af567124-fd0c-420e-b79b-41e8a7140cef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:23:48 crc kubenswrapper[4721]: I0202 13:23:48.059830 4721 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:48 crc kubenswrapper[4721]: I0202 13:23:48.059894 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:48 crc kubenswrapper[4721]: I0202 13:23:48.059907 4721 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:48 crc kubenswrapper[4721]: I0202 13:23:48.059920 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzqjs\" (UniqueName: \"kubernetes.io/projected/af567124-fd0c-420e-b79b-41e8a7140cef-kube-api-access-hzqjs\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:48 crc kubenswrapper[4721]: I0202 13:23:48.059933 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:48 crc kubenswrapper[4721]: I0202 13:23:48.059945 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af567124-fd0c-420e-b79b-41e8a7140cef-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:48 crc kubenswrapper[4721]: I0202 13:23:48.193656 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l7z2q" event={"ID":"af567124-fd0c-420e-b79b-41e8a7140cef","Type":"ContainerDied","Data":"0b6290d6ebded27a4f316246821ca1ecc86a05d55f3135936f20ff3e7cfb9766"} Feb 02 13:23:48 crc kubenswrapper[4721]: I0202 13:23:48.193741 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b6290d6ebded27a4f316246821ca1ecc86a05d55f3135936f20ff3e7cfb9766" Feb 02 13:23:48 crc kubenswrapper[4721]: I0202 
13:23:48.193848 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l7z2q" Feb 02 13:23:48 crc kubenswrapper[4721]: E0202 13:23:48.534341 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Feb 02 13:23:48 crc kubenswrapper[4721]: E0202 13:23:48.534850 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2ct7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-n52pp_openstack(9fa244a8-7588-4d87-bd5b-cbcd10780c83): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 13:23:48 crc kubenswrapper[4721]: E0202 13:23:48.536572 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-n52pp" podUID="9fa244a8-7588-4d87-bd5b-cbcd10780c83" Feb 02 13:23:48 crc kubenswrapper[4721]: E0202 13:23:48.910435 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 02 13:23:48 crc kubenswrapper[4721]: E0202 13:23:48.910684 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n585hd7h67dhb8h8fh586h5ffhfch5d4hd9h647h545h5fchd4h56bh5b4h9ch5d8h95h67fh555h656hcfhcdhbch5cdhd4hbh564hbbhd7h669q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fbbzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.007534 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-l7z2q"] Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.016059 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-l7z2q"] Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.027769 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-2tnbk" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.087203 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/026bbe7a-aec9-40ee-9be3-cdb35054e076-config\") pod \"026bbe7a-aec9-40ee-9be3-cdb35054e076\" (UID: \"026bbe7a-aec9-40ee-9be3-cdb35054e076\") " Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.087383 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kszgd\" (UniqueName: \"kubernetes.io/projected/026bbe7a-aec9-40ee-9be3-cdb35054e076-kube-api-access-kszgd\") pod \"026bbe7a-aec9-40ee-9be3-cdb35054e076\" (UID: \"026bbe7a-aec9-40ee-9be3-cdb35054e076\") " Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.087518 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/026bbe7a-aec9-40ee-9be3-cdb35054e076-combined-ca-bundle\") pod \"026bbe7a-aec9-40ee-9be3-cdb35054e076\" (UID: \"026bbe7a-aec9-40ee-9be3-cdb35054e076\") " Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.102406 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-dw7nl"] Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.102684 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/026bbe7a-aec9-40ee-9be3-cdb35054e076-kube-api-access-kszgd" (OuterVolumeSpecName: "kube-api-access-kszgd") pod "026bbe7a-aec9-40ee-9be3-cdb35054e076" (UID: "026bbe7a-aec9-40ee-9be3-cdb35054e076"). InnerVolumeSpecName "kube-api-access-kszgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:23:49 crc kubenswrapper[4721]: E0202 13:23:49.103114 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af567124-fd0c-420e-b79b-41e8a7140cef" containerName="keystone-bootstrap" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.103267 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="af567124-fd0c-420e-b79b-41e8a7140cef" containerName="keystone-bootstrap" Feb 02 13:23:49 crc kubenswrapper[4721]: E0202 13:23:49.103344 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="026bbe7a-aec9-40ee-9be3-cdb35054e076" containerName="neutron-db-sync" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.103397 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="026bbe7a-aec9-40ee-9be3-cdb35054e076" containerName="neutron-db-sync" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.103657 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="af567124-fd0c-420e-b79b-41e8a7140cef" containerName="keystone-bootstrap" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.103722 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="026bbe7a-aec9-40ee-9be3-cdb35054e076" containerName="neutron-db-sync" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.104612 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dw7nl" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.107222 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.107968 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.108520 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.112418 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.113029 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7z9s7" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.133158 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dw7nl"] Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.134191 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/026bbe7a-aec9-40ee-9be3-cdb35054e076-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "026bbe7a-aec9-40ee-9be3-cdb35054e076" (UID: "026bbe7a-aec9-40ee-9be3-cdb35054e076"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.156253 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/026bbe7a-aec9-40ee-9be3-cdb35054e076-config" (OuterVolumeSpecName: "config") pod "026bbe7a-aec9-40ee-9be3-cdb35054e076" (UID: "026bbe7a-aec9-40ee-9be3-cdb35054e076"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.192512 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-scripts\") pod \"keystone-bootstrap-dw7nl\" (UID: \"d168e414-ab7e-45ad-b142-25dcc1c359b0\") " pod="openstack/keystone-bootstrap-dw7nl" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.193372 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-fernet-keys\") pod \"keystone-bootstrap-dw7nl\" (UID: \"d168e414-ab7e-45ad-b142-25dcc1c359b0\") " pod="openstack/keystone-bootstrap-dw7nl" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.193566 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzdbz\" (UniqueName: \"kubernetes.io/projected/d168e414-ab7e-45ad-b142-25dcc1c359b0-kube-api-access-bzdbz\") pod \"keystone-bootstrap-dw7nl\" (UID: \"d168e414-ab7e-45ad-b142-25dcc1c359b0\") " pod="openstack/keystone-bootstrap-dw7nl" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.193873 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-combined-ca-bundle\") pod \"keystone-bootstrap-dw7nl\" (UID: \"d168e414-ab7e-45ad-b142-25dcc1c359b0\") " pod="openstack/keystone-bootstrap-dw7nl" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.196600 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-config-data\") pod \"keystone-bootstrap-dw7nl\" (UID: \"d168e414-ab7e-45ad-b142-25dcc1c359b0\") " pod="openstack/keystone-bootstrap-dw7nl" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.196799 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-credential-keys\") pod \"keystone-bootstrap-dw7nl\" (UID: \"d168e414-ab7e-45ad-b142-25dcc1c359b0\") " pod="openstack/keystone-bootstrap-dw7nl" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.197090 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/026bbe7a-aec9-40ee-9be3-cdb35054e076-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.197164 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/026bbe7a-aec9-40ee-9be3-cdb35054e076-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.197250 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kszgd\" (UniqueName: \"kubernetes.io/projected/026bbe7a-aec9-40ee-9be3-cdb35054e076-kube-api-access-kszgd\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.212171 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2tnbk" event={"ID":"026bbe7a-aec9-40ee-9be3-cdb35054e076","Type":"ContainerDied","Data":"3095e5ccc72506309a2adf6e3ceae18778d2c5f94dff5364525934f800f177aa"} Feb 02 
13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.212212 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-2tnbk" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.212236 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3095e5ccc72506309a2adf6e3ceae18778d2c5f94dff5364525934f800f177aa" Feb 02 13:23:49 crc kubenswrapper[4721]: E0202 13:23:49.228968 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-n52pp" podUID="9fa244a8-7588-4d87-bd5b-cbcd10780c83" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.300395 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-config-data\") pod \"keystone-bootstrap-dw7nl\" (UID: \"d168e414-ab7e-45ad-b142-25dcc1c359b0\") " pod="openstack/keystone-bootstrap-dw7nl" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.300614 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-credential-keys\") pod \"keystone-bootstrap-dw7nl\" (UID: \"d168e414-ab7e-45ad-b142-25dcc1c359b0\") " pod="openstack/keystone-bootstrap-dw7nl" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.301162 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-scripts\") pod \"keystone-bootstrap-dw7nl\" (UID: \"d168e414-ab7e-45ad-b142-25dcc1c359b0\") " pod="openstack/keystone-bootstrap-dw7nl" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.301360 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-fernet-keys\") pod \"keystone-bootstrap-dw7nl\" (UID: \"d168e414-ab7e-45ad-b142-25dcc1c359b0\") " pod="openstack/keystone-bootstrap-dw7nl" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.301493 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzdbz\" (UniqueName: \"kubernetes.io/projected/d168e414-ab7e-45ad-b142-25dcc1c359b0-kube-api-access-bzdbz\") pod \"keystone-bootstrap-dw7nl\" (UID: \"d168e414-ab7e-45ad-b142-25dcc1c359b0\") " pod="openstack/keystone-bootstrap-dw7nl" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.301859 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-combined-ca-bundle\") pod \"keystone-bootstrap-dw7nl\" (UID: \"d168e414-ab7e-45ad-b142-25dcc1c359b0\") " pod="openstack/keystone-bootstrap-dw7nl" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.313259 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-credential-keys\") pod \"keystone-bootstrap-dw7nl\" (UID: \"d168e414-ab7e-45ad-b142-25dcc1c359b0\") " pod="openstack/keystone-bootstrap-dw7nl" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.313618 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-config-data\") pod \"keystone-bootstrap-dw7nl\" (UID: \"d168e414-ab7e-45ad-b142-25dcc1c359b0\") " pod="openstack/keystone-bootstrap-dw7nl" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.315419 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-scripts\") pod \"keystone-bootstrap-dw7nl\" (UID: \"d168e414-ab7e-45ad-b142-25dcc1c359b0\") " pod="openstack/keystone-bootstrap-dw7nl" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.318057 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-fernet-keys\") pod \"keystone-bootstrap-dw7nl\" (UID: \"d168e414-ab7e-45ad-b142-25dcc1c359b0\") " pod="openstack/keystone-bootstrap-dw7nl" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.318238 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-combined-ca-bundle\") pod \"keystone-bootstrap-dw7nl\" (UID: \"d168e414-ab7e-45ad-b142-25dcc1c359b0\") " pod="openstack/keystone-bootstrap-dw7nl" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.321511 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzdbz\" (UniqueName: \"kubernetes.io/projected/d168e414-ab7e-45ad-b142-25dcc1c359b0-kube-api-access-bzdbz\") pod \"keystone-bootstrap-dw7nl\" (UID: \"d168e414-ab7e-45ad-b142-25dcc1c359b0\") " pod="openstack/keystone-bootstrap-dw7nl" Feb 02 13:23:49 crc kubenswrapper[4721]: I0202 13:23:49.575903 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dw7nl" Feb 02 13:23:49 crc kubenswrapper[4721]: E0202 13:23:49.719048 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 02 13:23:49 crc kubenswrapper[4721]: E0202 13:23:49.719369 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-777ht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-cgqfl_openstack(47a4176b-5f58-47a9-a614-e5d05526da18): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 13:23:49 crc kubenswrapper[4721]: E0202 13:23:49.720658 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-cgqfl" podUID="47a4176b-5f58-47a9-a614-e5d05526da18" Feb 02 13:23:50 crc kubenswrapper[4721]: E0202 13:23:50.247181 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-cgqfl" podUID="47a4176b-5f58-47a9-a614-e5d05526da18" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.295900 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-f9ml5"] Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.365792 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-mnq9d"] Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.367559 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.398129 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-mnq9d"] Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.438584 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-mnq9d\" (UID: \"4b323e62-7a54-4935-8e47-2df809ecb2f9\") " pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.438676 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-mnq9d\" (UID: \"4b323e62-7a54-4935-8e47-2df809ecb2f9\") " pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.438718 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-dns-svc\") pod \"dnsmasq-dns-55f844cf75-mnq9d\" (UID: \"4b323e62-7a54-4935-8e47-2df809ecb2f9\") " pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.438796 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcdcl\" (UniqueName: \"kubernetes.io/projected/4b323e62-7a54-4935-8e47-2df809ecb2f9-kube-api-access-gcdcl\") pod \"dnsmasq-dns-55f844cf75-mnq9d\" (UID: \"4b323e62-7a54-4935-8e47-2df809ecb2f9\") " pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.438826 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-config\") pod \"dnsmasq-dns-55f844cf75-mnq9d\" (UID: \"4b323e62-7a54-4935-8e47-2df809ecb2f9\") " pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.438860 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-mnq9d\" (UID: \"4b323e62-7a54-4935-8e47-2df809ecb2f9\") " pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.522317 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af567124-fd0c-420e-b79b-41e8a7140cef" path="/var/lib/kubelet/pods/af567124-fd0c-420e-b79b-41e8a7140cef/volumes" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.523044 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7556fd87fb-z78lc"] Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.536632 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7556fd87fb-z78lc"] Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.536721 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7556fd87fb-z78lc" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.539201 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.540668 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-mnq9d\" (UID: \"4b323e62-7a54-4935-8e47-2df809ecb2f9\") " pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.540725 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-dns-svc\") pod \"dnsmasq-dns-55f844cf75-mnq9d\" (UID: \"4b323e62-7a54-4935-8e47-2df809ecb2f9\") " pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.540809 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcdcl\" (UniqueName: \"kubernetes.io/projected/4b323e62-7a54-4935-8e47-2df809ecb2f9-kube-api-access-gcdcl\") pod \"dnsmasq-dns-55f844cf75-mnq9d\" (UID: \"4b323e62-7a54-4935-8e47-2df809ecb2f9\") " pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.540838 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-config\") pod \"dnsmasq-dns-55f844cf75-mnq9d\" (UID: \"4b323e62-7a54-4935-8e47-2df809ecb2f9\") " pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.540877 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-mnq9d\" (UID: \"4b323e62-7a54-4935-8e47-2df809ecb2f9\") " pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.540918 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-mnq9d\" (UID: \"4b323e62-7a54-4935-8e47-2df809ecb2f9\") " pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.541911 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-mnq9d\" (UID: \"4b323e62-7a54-4935-8e47-2df809ecb2f9\") " pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.542903 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-mnq9d\" (UID: \"4b323e62-7a54-4935-8e47-2df809ecb2f9\") " pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.543672 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-dns-svc\") pod \"dnsmasq-dns-55f844cf75-mnq9d\" (UID: \"4b323e62-7a54-4935-8e47-2df809ecb2f9\") " pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.543703 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-config\") pod \"dnsmasq-dns-55f844cf75-mnq9d\" (UID: \"4b323e62-7a54-4935-8e47-2df809ecb2f9\") " pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.544095 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-mnq9d\" (UID: \"4b323e62-7a54-4935-8e47-2df809ecb2f9\") " pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.544639 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4l6jw" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.544803 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.544984 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.611488 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcdcl\" (UniqueName: \"kubernetes.io/projected/4b323e62-7a54-4935-8e47-2df809ecb2f9-kube-api-access-gcdcl\") pod \"dnsmasq-dns-55f844cf75-mnq9d\" (UID: \"4b323e62-7a54-4935-8e47-2df809ecb2f9\") " pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.650107 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f746721-5da3-4418-8ef6-d0b88f2121bc-ovndb-tls-certs\") pod \"neutron-7556fd87fb-z78lc\" (UID: \"6f746721-5da3-4418-8ef6-d0b88f2121bc\") " pod="openstack/neutron-7556fd87fb-z78lc" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.650184 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f746721-5da3-4418-8ef6-d0b88f2121bc-combined-ca-bundle\") pod \"neutron-7556fd87fb-z78lc\" (UID: \"6f746721-5da3-4418-8ef6-d0b88f2121bc\") " pod="openstack/neutron-7556fd87fb-z78lc" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.650229 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw65p\" (UniqueName: \"kubernetes.io/projected/6f746721-5da3-4418-8ef6-d0b88f2121bc-kube-api-access-jw65p\") pod \"neutron-7556fd87fb-z78lc\" (UID: \"6f746721-5da3-4418-8ef6-d0b88f2121bc\") " pod="openstack/neutron-7556fd87fb-z78lc" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.650286 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6f746721-5da3-4418-8ef6-d0b88f2121bc-config\") pod \"neutron-7556fd87fb-z78lc\" (UID: \"6f746721-5da3-4418-8ef6-d0b88f2121bc\") " pod="openstack/neutron-7556fd87fb-z78lc" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.650322 4721 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6f746721-5da3-4418-8ef6-d0b88f2121bc-httpd-config\") pod \"neutron-7556fd87fb-z78lc\" (UID: \"6f746721-5da3-4418-8ef6-d0b88f2121bc\") " pod="openstack/neutron-7556fd87fb-z78lc" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.752718 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw65p\" (UniqueName: \"kubernetes.io/projected/6f746721-5da3-4418-8ef6-d0b88f2121bc-kube-api-access-jw65p\") pod \"neutron-7556fd87fb-z78lc\" (UID: \"6f746721-5da3-4418-8ef6-d0b88f2121bc\") " pod="openstack/neutron-7556fd87fb-z78lc" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.752812 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6f746721-5da3-4418-8ef6-d0b88f2121bc-config\") pod \"neutron-7556fd87fb-z78lc\" (UID: \"6f746721-5da3-4418-8ef6-d0b88f2121bc\") " pod="openstack/neutron-7556fd87fb-z78lc" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.752854 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6f746721-5da3-4418-8ef6-d0b88f2121bc-httpd-config\") pod \"neutron-7556fd87fb-z78lc\" (UID: \"6f746721-5da3-4418-8ef6-d0b88f2121bc\") " pod="openstack/neutron-7556fd87fb-z78lc" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.753095 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f746721-5da3-4418-8ef6-d0b88f2121bc-ovndb-tls-certs\") pod \"neutron-7556fd87fb-z78lc\" (UID: \"6f746721-5da3-4418-8ef6-d0b88f2121bc\") " pod="openstack/neutron-7556fd87fb-z78lc" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.753152 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f746721-5da3-4418-8ef6-d0b88f2121bc-combined-ca-bundle\") pod \"neutron-7556fd87fb-z78lc\" (UID: \"6f746721-5da3-4418-8ef6-d0b88f2121bc\") " pod="openstack/neutron-7556fd87fb-z78lc" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.757909 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.758010 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f746721-5da3-4418-8ef6-d0b88f2121bc-combined-ca-bundle\") pod \"neutron-7556fd87fb-z78lc\" (UID: \"6f746721-5da3-4418-8ef6-d0b88f2121bc\") " pod="openstack/neutron-7556fd87fb-z78lc" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.758791 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f746721-5da3-4418-8ef6-d0b88f2121bc-ovndb-tls-certs\") pod \"neutron-7556fd87fb-z78lc\" (UID: \"6f746721-5da3-4418-8ef6-d0b88f2121bc\") " pod="openstack/neutron-7556fd87fb-z78lc" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.758807 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6f746721-5da3-4418-8ef6-d0b88f2121bc-httpd-config\") pod \"neutron-7556fd87fb-z78lc\" (UID: \"6f746721-5da3-4418-8ef6-d0b88f2121bc\") " pod="openstack/neutron-7556fd87fb-z78lc" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.760316 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6f746721-5da3-4418-8ef6-d0b88f2121bc-config\") pod \"neutron-7556fd87fb-z78lc\" (UID: \"6f746721-5da3-4418-8ef6-d0b88f2121bc\") " pod="openstack/neutron-7556fd87fb-z78lc" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.777418 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw65p\" (UniqueName: \"kubernetes.io/projected/6f746721-5da3-4418-8ef6-d0b88f2121bc-kube-api-access-jw65p\") pod \"neutron-7556fd87fb-z78lc\" (UID: \"6f746721-5da3-4418-8ef6-d0b88f2121bc\") " pod="openstack/neutron-7556fd87fb-z78lc" Feb 02 13:23:50 crc kubenswrapper[4721]: I0202 13:23:50.855349 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7556fd87fb-z78lc" Feb 02 13:23:51 crc kubenswrapper[4721]: E0202 13:23:51.749146 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 02 13:23:51 crc kubenswrapper[4721]: E0202 13:23:51.749392 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4b6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-7wjxh_openstack(ad3578ef-5d1b-4c52-939c-237feadc1c5c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 13:23:51 crc kubenswrapper[4721]: E0202 13:23:51.750592 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-7wjxh" podUID="ad3578ef-5d1b-4c52-939c-237feadc1c5c" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.244456 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.310195 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.311515 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" event={"ID":"42af4b6d-a3ac-4a90-8338-71dcdba65713","Type":"ContainerDied","Data":"456048e529dbcb77018460d463790172fa02060ea250e5dff1aa993f12b0810c"} Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.311592 4721 scope.go:117] "RemoveContainer" containerID="bc03b54bbb02a10feb6c590756e051033c53cca1bb5f2262c3ad8498b1f79580" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.315573 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t64pt\" (UniqueName: \"kubernetes.io/projected/42af4b6d-a3ac-4a90-8338-71dcdba65713-kube-api-access-t64pt\") pod \"42af4b6d-a3ac-4a90-8338-71dcdba65713\" (UID: \"42af4b6d-a3ac-4a90-8338-71dcdba65713\") " Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.315851 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-dns-svc\") pod \"42af4b6d-a3ac-4a90-8338-71dcdba65713\" (UID: \"42af4b6d-a3ac-4a90-8338-71dcdba65713\") " Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.317186 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-config\") pod \"42af4b6d-a3ac-4a90-8338-71dcdba65713\" (UID: \"42af4b6d-a3ac-4a90-8338-71dcdba65713\") " Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.317246 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-dns-swift-storage-0\") pod \"42af4b6d-a3ac-4a90-8338-71dcdba65713\" (UID: \"42af4b6d-a3ac-4a90-8338-71dcdba65713\") " Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.317276 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-ovsdbserver-sb\") pod \"42af4b6d-a3ac-4a90-8338-71dcdba65713\" (UID: \"42af4b6d-a3ac-4a90-8338-71dcdba65713\") " Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.317344 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-ovsdbserver-nb\") pod \"42af4b6d-a3ac-4a90-8338-71dcdba65713\" (UID: \"42af4b6d-a3ac-4a90-8338-71dcdba65713\") " Feb 02 13:23:52 crc kubenswrapper[4721]: E0202 13:23:52.317979 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-7wjxh" podUID="ad3578ef-5d1b-4c52-939c-237feadc1c5c" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.354674 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42af4b6d-a3ac-4a90-8338-71dcdba65713-kube-api-access-t64pt" (OuterVolumeSpecName: "kube-api-access-t64pt") pod 
"42af4b6d-a3ac-4a90-8338-71dcdba65713" (UID: "42af4b6d-a3ac-4a90-8338-71dcdba65713"). InnerVolumeSpecName "kube-api-access-t64pt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.366666 4721 scope.go:117] "RemoveContainer" containerID="5e753861212a75a6fc0b9472a29c53a493e744b8cbd4bb8e6b1dda52762f0e28" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.426361 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t64pt\" (UniqueName: \"kubernetes.io/projected/42af4b6d-a3ac-4a90-8338-71dcdba65713-kube-api-access-t64pt\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.431084 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "42af4b6d-a3ac-4a90-8338-71dcdba65713" (UID: "42af4b6d-a3ac-4a90-8338-71dcdba65713"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.454778 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "42af4b6d-a3ac-4a90-8338-71dcdba65713" (UID: "42af4b6d-a3ac-4a90-8338-71dcdba65713"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.458506 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "42af4b6d-a3ac-4a90-8338-71dcdba65713" (UID: "42af4b6d-a3ac-4a90-8338-71dcdba65713"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.461953 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-config" (OuterVolumeSpecName: "config") pod "42af4b6d-a3ac-4a90-8338-71dcdba65713" (UID: "42af4b6d-a3ac-4a90-8338-71dcdba65713"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.462880 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "42af4b6d-a3ac-4a90-8338-71dcdba65713" (UID: "42af4b6d-a3ac-4a90-8338-71dcdba65713"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.528882 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.528920 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.528931 4721 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.528947 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.528959 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42af4b6d-a3ac-4a90-8338-71dcdba65713-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.663106 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-f8lxs"] Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.679201 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-f8lxs"] Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.877254 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-f9ml5"] Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.897404 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-59d9f7977f-7dt9k"] Feb 02 13:23:52 crc kubenswrapper[4721]: E0202 13:23:52.897934 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42af4b6d-a3ac-4a90-8338-71dcdba65713" containerName="dnsmasq-dns" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.897950 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="42af4b6d-a3ac-4a90-8338-71dcdba65713" containerName="dnsmasq-dns" Feb 02 13:23:52 crc kubenswrapper[4721]: E0202 13:23:52.898005 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42af4b6d-a3ac-4a90-8338-71dcdba65713" containerName="init" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.898015 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="42af4b6d-a3ac-4a90-8338-71dcdba65713" containerName="init" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.898311 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="42af4b6d-a3ac-4a90-8338-71dcdba65713" containerName="dnsmasq-dns" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.904446 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-59d9f7977f-7dt9k" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.910021 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.922258 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59d9f7977f-7dt9k"] Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.923222 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.932399 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dw7nl"] Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.944292 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-internal-tls-certs\") pod \"neutron-59d9f7977f-7dt9k\" (UID: \"92544741-12fa-42ac-ba5b-67179ec9443b\") " pod="openstack/neutron-59d9f7977f-7dt9k" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.944660 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zxkh\" (UniqueName: \"kubernetes.io/projected/92544741-12fa-42ac-ba5b-67179ec9443b-kube-api-access-9zxkh\") pod \"neutron-59d9f7977f-7dt9k\" (UID: \"92544741-12fa-42ac-ba5b-67179ec9443b\") " pod="openstack/neutron-59d9f7977f-7dt9k" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.944740 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-public-tls-certs\") pod \"neutron-59d9f7977f-7dt9k\" (UID: \"92544741-12fa-42ac-ba5b-67179ec9443b\") " pod="openstack/neutron-59d9f7977f-7dt9k" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.944807 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-combined-ca-bundle\") pod \"neutron-59d9f7977f-7dt9k\" (UID: \"92544741-12fa-42ac-ba5b-67179ec9443b\") " pod="openstack/neutron-59d9f7977f-7dt9k" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.944876 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-config\") pod \"neutron-59d9f7977f-7dt9k\" (UID: \"92544741-12fa-42ac-ba5b-67179ec9443b\") " pod="openstack/neutron-59d9f7977f-7dt9k" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.944952 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-ovndb-tls-certs\") pod \"neutron-59d9f7977f-7dt9k\" (UID: \"92544741-12fa-42ac-ba5b-67179ec9443b\") " pod="openstack/neutron-59d9f7977f-7dt9k" Feb 02 13:23:52 crc kubenswrapper[4721]: I0202 13:23:52.944985 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-httpd-config\") pod \"neutron-59d9f7977f-7dt9k\" (UID: \"92544741-12fa-42ac-ba5b-67179ec9443b\") " pod="openstack/neutron-59d9f7977f-7dt9k" Feb 02 13:23:52 crc 
kubenswrapper[4721]: I0202 13:23:52.999060 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 13:23:53 crc kubenswrapper[4721]: I0202 13:23:53.047826 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zxkh\" (UniqueName: \"kubernetes.io/projected/92544741-12fa-42ac-ba5b-67179ec9443b-kube-api-access-9zxkh\") pod \"neutron-59d9f7977f-7dt9k\" (UID: \"92544741-12fa-42ac-ba5b-67179ec9443b\") " pod="openstack/neutron-59d9f7977f-7dt9k" Feb 02 13:23:53 crc kubenswrapper[4721]: I0202 13:23:53.047910 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-public-tls-certs\") pod \"neutron-59d9f7977f-7dt9k\" (UID: \"92544741-12fa-42ac-ba5b-67179ec9443b\") " pod="openstack/neutron-59d9f7977f-7dt9k" Feb 02 13:23:53 crc kubenswrapper[4721]: I0202 13:23:53.047964 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-combined-ca-bundle\") pod \"neutron-59d9f7977f-7dt9k\" (UID: \"92544741-12fa-42ac-ba5b-67179ec9443b\") " pod="openstack/neutron-59d9f7977f-7dt9k" Feb 02 13:23:53 crc kubenswrapper[4721]: I0202 13:23:53.048859 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-config\") pod \"neutron-59d9f7977f-7dt9k\" (UID: \"92544741-12fa-42ac-ba5b-67179ec9443b\") " pod="openstack/neutron-59d9f7977f-7dt9k" Feb 02 13:23:53 crc kubenswrapper[4721]: I0202 13:23:53.048923 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-ovndb-tls-certs\") pod \"neutron-59d9f7977f-7dt9k\" (UID: \"92544741-12fa-42ac-ba5b-67179ec9443b\") " pod="openstack/neutron-59d9f7977f-7dt9k" Feb 02 13:23:53 crc kubenswrapper[4721]: I0202 13:23:53.048968 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-httpd-config\") pod \"neutron-59d9f7977f-7dt9k\" (UID: \"92544741-12fa-42ac-ba5b-67179ec9443b\") " pod="openstack/neutron-59d9f7977f-7dt9k" Feb 02 13:23:53 crc kubenswrapper[4721]: I0202 13:23:53.049160 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-internal-tls-certs\") pod \"neutron-59d9f7977f-7dt9k\" (UID: \"92544741-12fa-42ac-ba5b-67179ec9443b\") " pod="openstack/neutron-59d9f7977f-7dt9k" Feb 02 13:23:53 crc kubenswrapper[4721]: I0202 13:23:53.056790 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-mnq9d"] Feb 02 13:23:53 crc kubenswrapper[4721]: I0202 13:23:53.062884 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-public-tls-certs\") pod \"neutron-59d9f7977f-7dt9k\" (UID: \"92544741-12fa-42ac-ba5b-67179ec9443b\") " pod="openstack/neutron-59d9f7977f-7dt9k" Feb 02 13:23:53 crc kubenswrapper[4721]: I0202 13:23:53.065543 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-config\") pod \"neutron-59d9f7977f-7dt9k\" (UID: \"92544741-12fa-42ac-ba5b-67179ec9443b\") " pod="openstack/neutron-59d9f7977f-7dt9k" Feb 02 13:23:53 crc kubenswrapper[4721]: I0202 13:23:53.065644 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-combined-ca-bundle\") pod \"neutron-59d9f7977f-7dt9k\" (UID: \"92544741-12fa-42ac-ba5b-67179ec9443b\") " pod="openstack/neutron-59d9f7977f-7dt9k" Feb 02 13:23:53 crc kubenswrapper[4721]: I0202 13:23:53.066148 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-internal-tls-certs\") pod \"neutron-59d9f7977f-7dt9k\" (UID: \"92544741-12fa-42ac-ba5b-67179ec9443b\") " pod="openstack/neutron-59d9f7977f-7dt9k" Feb 02 13:23:53 crc kubenswrapper[4721]: I0202 13:23:53.073305 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-ovndb-tls-certs\") pod \"neutron-59d9f7977f-7dt9k\" (UID: \"92544741-12fa-42ac-ba5b-67179ec9443b\") " pod="openstack/neutron-59d9f7977f-7dt9k" Feb 02 13:23:53 crc kubenswrapper[4721]: I0202 13:23:53.079775 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-httpd-config\") pod \"neutron-59d9f7977f-7dt9k\" (UID: \"92544741-12fa-42ac-ba5b-67179ec9443b\") " pod="openstack/neutron-59d9f7977f-7dt9k" Feb 02 13:23:53 crc kubenswrapper[4721]: I0202 13:23:53.090273 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zxkh\" (UniqueName: \"kubernetes.io/projected/92544741-12fa-42ac-ba5b-67179ec9443b-kube-api-access-9zxkh\") pod \"neutron-59d9f7977f-7dt9k\" (UID: \"92544741-12fa-42ac-ba5b-67179ec9443b\") " pod="openstack/neutron-59d9f7977f-7dt9k" Feb 02 13:23:53 crc kubenswrapper[4721]: W0202 13:23:53.126688 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b323e62_7a54_4935_8e47_2df809ecb2f9.slice/crio-2496bbb6bf1b14291e279777fda509f8bd82ead08412e32b9ec7b5156bb292a2 WatchSource:0}: Error finding container 2496bbb6bf1b14291e279777fda509f8bd82ead08412e32b9ec7b5156bb292a2: Status 404 returned error can't find the container with id 2496bbb6bf1b14291e279777fda509f8bd82ead08412e32b9ec7b5156bb292a2 Feb 02 13:23:53 crc kubenswrapper[4721]: I0202 13:23:53.231867 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-59d9f7977f-7dt9k" Feb 02 13:23:53 crc kubenswrapper[4721]: I0202 13:23:53.321790 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-86z2v" event={"ID":"bdd67c16-7130-4095-952f-006aa5bcd5bb","Type":"ContainerStarted","Data":"e7d18b6bc119712cab584a04197b6912f210c5505b0b18fd032278cec2f8f3b5"} Feb 02 13:23:53 crc kubenswrapper[4721]: I0202 13:23:53.325492 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"231734f6-4050-4aff-92a2-a92982428b95","Type":"ContainerStarted","Data":"a421eaea63c1491fb34aa76f2530e7a4cae6ab0162bb28e14e6199cf5d6daf85"} Feb 02 13:23:53 crc kubenswrapper[4721]: I0202 13:23:53.349737 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-86z2v" podStartSLOduration=3.956863726 podStartE2EDuration="34.349715038s" podCreationTimestamp="2026-02-02 13:23:19 +0000 UTC" firstStartedPulling="2026-02-02 13:23:21.315675566 +0000 UTC m=+1341.618189955" lastFinishedPulling="2026-02-02 13:23:51.708526878 +0000 UTC m=+1372.011041267" observedRunningTime="2026-02-02 13:23:53.342525123 +0000 UTC m=+1373.645039512" watchObservedRunningTime="2026-02-02 13:23:53.349715038 +0000 UTC m=+1373.652229447" Feb 02 13:23:53 crc kubenswrapper[4721]: I0202 13:23:53.354122 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" event={"ID":"4b323e62-7a54-4935-8e47-2df809ecb2f9","Type":"ContainerStarted","Data":"2496bbb6bf1b14291e279777fda509f8bd82ead08412e32b9ec7b5156bb292a2"} Feb 02 13:23:53 crc kubenswrapper[4721]: I0202 13:23:53.360541 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-f9ml5" event={"ID":"a879c878-fff2-4aa4-b08a-67d13027b95e","Type":"ContainerStarted","Data":"e8f80343ecb3d990b67219a5c3260755b6d6ddca2e0665007891b6a3ff94aed6"} Feb 02 13:23:53 crc kubenswrapper[4721]: I0202 13:23:53.361802 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dw7nl" event={"ID":"d168e414-ab7e-45ad-b142-25dcc1c359b0","Type":"ContainerStarted","Data":"b5e597a9c6809a0c74e15c50feaa328c23d1128cb144bc36612339923f03dd73"} Feb 02 13:23:53 crc kubenswrapper[4721]: I0202 13:23:53.744163 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7556fd87fb-z78lc"] Feb 02 13:23:53 crc kubenswrapper[4721]: I0202 13:23:53.804782 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 13:23:54 crc kubenswrapper[4721]: I0202 13:23:54.234815 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59d9f7977f-7dt9k"] Feb 02 13:23:54 crc kubenswrapper[4721]: W0202 13:23:54.248864 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92544741_12fa_42ac_ba5b_67179ec9443b.slice/crio-f6156b432d02a8811d9c46ced7a52980aa20788c34415c25f1c59b54dac366c9 WatchSource:0}: Error finding container f6156b432d02a8811d9c46ced7a52980aa20788c34415c25f1c59b54dac366c9: Status 404 returned error can't find the container with id f6156b432d02a8811d9c46ced7a52980aa20788c34415c25f1c59b54dac366c9 Feb 02 13:23:54 crc kubenswrapper[4721]: I0202 13:23:54.379644 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7556fd87fb-z78lc" 
event={"ID":"6f746721-5da3-4418-8ef6-d0b88f2121bc","Type":"ContainerStarted","Data":"4badbbeb19bd1ce4dabd3dbcb1949a66c97f449e7bcb0e940791d3782f7337b1"} Feb 02 13:23:54 crc kubenswrapper[4721]: I0202 13:23:54.380017 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7556fd87fb-z78lc" event={"ID":"6f746721-5da3-4418-8ef6-d0b88f2121bc","Type":"ContainerStarted","Data":"8cc4a5e49bcdc1259392f527ba7a63bedab94aac24105814e1cdaa17c7280e6e"} Feb 02 13:23:54 crc kubenswrapper[4721]: I0202 13:23:54.382045 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5","Type":"ContainerStarted","Data":"532a8ed94d348484657193a2554795492b450f7f802dc97b0881265e4ef935ae"} Feb 02 13:23:54 crc kubenswrapper[4721]: I0202 13:23:54.386630 4721 generic.go:334] "Generic (PLEG): container finished" podID="4b323e62-7a54-4935-8e47-2df809ecb2f9" containerID="ece0ac3461ccc45508567010797bf0f995740c80a06e3f1018effd1949e58be1" exitCode=0 Feb 02 13:23:54 crc kubenswrapper[4721]: I0202 13:23:54.386760 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" event={"ID":"4b323e62-7a54-4935-8e47-2df809ecb2f9","Type":"ContainerDied","Data":"ece0ac3461ccc45508567010797bf0f995740c80a06e3f1018effd1949e58be1"} Feb 02 13:23:54 crc kubenswrapper[4721]: I0202 13:23:54.398772 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59d9f7977f-7dt9k" event={"ID":"92544741-12fa-42ac-ba5b-67179ec9443b","Type":"ContainerStarted","Data":"f6156b432d02a8811d9c46ced7a52980aa20788c34415c25f1c59b54dac366c9"} Feb 02 13:23:54 crc kubenswrapper[4721]: I0202 13:23:54.423294 4721 generic.go:334] "Generic (PLEG): container finished" podID="a879c878-fff2-4aa4-b08a-67d13027b95e" containerID="474f0b3dd5fc3d829759613af04141efbfcae7faec967567689e0cac24ad4e8d" exitCode=0 Feb 02 13:23:54 crc kubenswrapper[4721]: I0202 13:23:54.444684 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42af4b6d-a3ac-4a90-8338-71dcdba65713" path="/var/lib/kubelet/pods/42af4b6d-a3ac-4a90-8338-71dcdba65713/volumes" Feb 02 13:23:54 crc kubenswrapper[4721]: I0202 13:23:54.445329 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"231734f6-4050-4aff-92a2-a92982428b95","Type":"ContainerStarted","Data":"4c64a38f5abe2c61595868db1d01effc7845b6eae80975a67d802574bacd5fa2"} Feb 02 13:23:54 crc kubenswrapper[4721]: I0202 13:23:54.445366 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-f9ml5" event={"ID":"a879c878-fff2-4aa4-b08a-67d13027b95e","Type":"ContainerDied","Data":"474f0b3dd5fc3d829759613af04141efbfcae7faec967567689e0cac24ad4e8d"} Feb 02 13:23:54 crc kubenswrapper[4721]: I0202 13:23:54.445392 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2651c902-94b1-4da3-b03f-cd5aee83749b","Type":"ContainerStarted","Data":"de0fa32e0591d0c591d4364744d36cc03bd323f3bbad2d10abfa57cf3278568e"} Feb 02 13:23:54 crc kubenswrapper[4721]: I0202 13:23:54.453388 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dw7nl" event={"ID":"d168e414-ab7e-45ad-b142-25dcc1c359b0","Type":"ContainerStarted","Data":"3705d645077158cd12edf8f0f9b5a39f0ba95d5854f57c056a964be7f2bc24c9"} Feb 02 13:23:54 crc kubenswrapper[4721]: I0202 13:23:54.489353 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-bootstrap-dw7nl" podStartSLOduration=5.489330525 podStartE2EDuration="5.489330525s" podCreationTimestamp="2026-02-02 13:23:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:23:54.476730313 +0000 UTC m=+1374.779244712" watchObservedRunningTime="2026-02-02 13:23:54.489330525 +0000 UTC m=+1374.791844924" Feb 02 13:23:54 crc kubenswrapper[4721]: I0202 13:23:54.630868 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-f8lxs" podUID="42af4b6d-a3ac-4a90-8338-71dcdba65713" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.187:5353: i/o timeout" Feb 02 13:23:54 crc kubenswrapper[4721]: I0202 13:23:54.978039 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-f9ml5" Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.039580 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-dns-svc\") pod \"a879c878-fff2-4aa4-b08a-67d13027b95e\" (UID: \"a879c878-fff2-4aa4-b08a-67d13027b95e\") " Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.039674 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sws9d\" (UniqueName: \"kubernetes.io/projected/a879c878-fff2-4aa4-b08a-67d13027b95e-kube-api-access-sws9d\") pod \"a879c878-fff2-4aa4-b08a-67d13027b95e\" (UID: \"a879c878-fff2-4aa4-b08a-67d13027b95e\") " Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.039728 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-dns-swift-storage-0\") pod \"a879c878-fff2-4aa4-b08a-67d13027b95e\" (UID: \"a879c878-fff2-4aa4-b08a-67d13027b95e\") " Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.039742 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-ovsdbserver-nb\") pod \"a879c878-fff2-4aa4-b08a-67d13027b95e\" (UID: \"a879c878-fff2-4aa4-b08a-67d13027b95e\") " Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.039811 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-ovsdbserver-sb\") pod \"a879c878-fff2-4aa4-b08a-67d13027b95e\" (UID: \"a879c878-fff2-4aa4-b08a-67d13027b95e\") " Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.039935 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-config\") pod \"a879c878-fff2-4aa4-b08a-67d13027b95e\" (UID: \"a879c878-fff2-4aa4-b08a-67d13027b95e\") " Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.076255 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a879c878-fff2-4aa4-b08a-67d13027b95e-kube-api-access-sws9d" (OuterVolumeSpecName: "kube-api-access-sws9d") pod "a879c878-fff2-4aa4-b08a-67d13027b95e" (UID: "a879c878-fff2-4aa4-b08a-67d13027b95e"). InnerVolumeSpecName "kube-api-access-sws9d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.111719 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a879c878-fff2-4aa4-b08a-67d13027b95e" (UID: "a879c878-fff2-4aa4-b08a-67d13027b95e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.120630 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a879c878-fff2-4aa4-b08a-67d13027b95e" (UID: "a879c878-fff2-4aa4-b08a-67d13027b95e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.125788 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a879c878-fff2-4aa4-b08a-67d13027b95e" (UID: "a879c878-fff2-4aa4-b08a-67d13027b95e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.145735 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.145766 4721 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.145777 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.145786 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sws9d\" (UniqueName: \"kubernetes.io/projected/a879c878-fff2-4aa4-b08a-67d13027b95e-kube-api-access-sws9d\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.145971 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a879c878-fff2-4aa4-b08a-67d13027b95e" (UID: "a879c878-fff2-4aa4-b08a-67d13027b95e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.151787 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-config" (OuterVolumeSpecName: "config") pod "a879c878-fff2-4aa4-b08a-67d13027b95e" (UID: "a879c878-fff2-4aa4-b08a-67d13027b95e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.252445 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.252479 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a879c878-fff2-4aa4-b08a-67d13027b95e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:55 crc kubenswrapper[4721]: E0202 13:23:55.454719 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdd67c16_7130_4095_952f_006aa5bcd5bb.slice/crio-conmon-e7d18b6bc119712cab584a04197b6912f210c5505b0b18fd032278cec2f8f3b5.scope\": RecentStats: unable to find data in memory cache]" Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.474299 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59d9f7977f-7dt9k" event={"ID":"92544741-12fa-42ac-ba5b-67179ec9443b","Type":"ContainerStarted","Data":"8b6cdf8693a8045891752ca1818e301a94afd2ff81089e5e3f78d2e62be53c12"} Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.474340 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59d9f7977f-7dt9k" event={"ID":"92544741-12fa-42ac-ba5b-67179ec9443b","Type":"ContainerStarted","Data":"da489fd3d8eceb37f5d84a7cb93a4298ee1e4fc025f5a8e61c61624f47e5cc8a"} Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.475245 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-59d9f7977f-7dt9k" Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.478358 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"231734f6-4050-4aff-92a2-a92982428b95","Type":"ContainerStarted","Data":"b3242cfa6b241cc91e98f1adae423b7d5225bd04f979b2258cca2ccaec320329"} Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.478463 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="231734f6-4050-4aff-92a2-a92982428b95" containerName="glance-log" containerID="cri-o://4c64a38f5abe2c61595868db1d01effc7845b6eae80975a67d802574bacd5fa2" gracePeriod=30 Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.478484 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="231734f6-4050-4aff-92a2-a92982428b95" containerName="glance-httpd" containerID="cri-o://b3242cfa6b241cc91e98f1adae423b7d5225bd04f979b2258cca2ccaec320329" gracePeriod=30 Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.482263 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-f9ml5" event={"ID":"a879c878-fff2-4aa4-b08a-67d13027b95e","Type":"ContainerDied","Data":"e8f80343ecb3d990b67219a5c3260755b6d6ddca2e0665007891b6a3ff94aed6"} Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.482289 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-f9ml5" Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.482313 4721 scope.go:117] "RemoveContainer" containerID="474f0b3dd5fc3d829759613af04141efbfcae7faec967567689e0cac24ad4e8d" Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.485473 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2651c902-94b1-4da3-b03f-cd5aee83749b","Type":"ContainerStarted","Data":"9b2f14ad59791ba7c4b7509a93d19935150acbbd21de442df3a99c78406b90d5"} Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.488333 4721 generic.go:334] "Generic (PLEG): container finished" podID="bdd67c16-7130-4095-952f-006aa5bcd5bb" containerID="e7d18b6bc119712cab584a04197b6912f210c5505b0b18fd032278cec2f8f3b5" exitCode=0 Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.488396 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-86z2v" event={"ID":"bdd67c16-7130-4095-952f-006aa5bcd5bb","Type":"ContainerDied","Data":"e7d18b6bc119712cab584a04197b6912f210c5505b0b18fd032278cec2f8f3b5"} Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.506096 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-59d9f7977f-7dt9k" podStartSLOduration=3.506040219 podStartE2EDuration="3.506040219s" podCreationTimestamp="2026-02-02 13:23:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:23:55.495207525 +0000 UTC m=+1375.797721914" watchObservedRunningTime="2026-02-02 13:23:55.506040219 +0000 UTC m=+1375.808554618" Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.536880 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7556fd87fb-z78lc" event={"ID":"6f746721-5da3-4418-8ef6-d0b88f2121bc","Type":"ContainerStarted","Data":"8f88f40e370779b33eeb9ef5e263779063066216a67b07cb99c3e5b293638a3d"} Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.537109 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7556fd87fb-z78lc" Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.548525 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" event={"ID":"4b323e62-7a54-4935-8e47-2df809ecb2f9","Type":"ContainerStarted","Data":"38beb737c04426dba147dffdfeaee7f338c6927e4b1cdcf880429b179ca9988b"} Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.549214 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.558763 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=26.558717518 podStartE2EDuration="26.558717518s" podCreationTimestamp="2026-02-02 13:23:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:23:55.545041507 +0000 UTC m=+1375.847555896" watchObservedRunningTime="2026-02-02 13:23:55.558717518 +0000 UTC m=+1375.861231917" Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.619370 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-f9ml5"] Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.630095 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-785d8bcb8c-f9ml5"] Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.636649 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7556fd87fb-z78lc" podStartSLOduration=5.636632371 podStartE2EDuration="5.636632371s" podCreationTimestamp="2026-02-02 13:23:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:23:55.601266492 +0000 UTC m=+1375.903780881" watchObservedRunningTime="2026-02-02 13:23:55.636632371 +0000 UTC m=+1375.939146760" Feb 02 13:23:55 crc kubenswrapper[4721]: I0202 13:23:55.649878 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" podStartSLOduration=5.64985898 podStartE2EDuration="5.64985898s" podCreationTimestamp="2026-02-02 13:23:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:23:55.630767552 +0000 UTC m=+1375.933281941" watchObservedRunningTime="2026-02-02 13:23:55.64985898 +0000 UTC m=+1375.952373369" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.255988 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.396671 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\") pod \"231734f6-4050-4aff-92a2-a92982428b95\" (UID: \"231734f6-4050-4aff-92a2-a92982428b95\") " Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.396749 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw8rf\" (UniqueName: \"kubernetes.io/projected/231734f6-4050-4aff-92a2-a92982428b95-kube-api-access-tw8rf\") pod \"231734f6-4050-4aff-92a2-a92982428b95\" (UID: \"231734f6-4050-4aff-92a2-a92982428b95\") " Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.396788 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/231734f6-4050-4aff-92a2-a92982428b95-httpd-run\") pod \"231734f6-4050-4aff-92a2-a92982428b95\" (UID: \"231734f6-4050-4aff-92a2-a92982428b95\") " Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.396923 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/231734f6-4050-4aff-92a2-a92982428b95-logs\") pod \"231734f6-4050-4aff-92a2-a92982428b95\" (UID: \"231734f6-4050-4aff-92a2-a92982428b95\") " Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.396956 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/231734f6-4050-4aff-92a2-a92982428b95-scripts\") pod \"231734f6-4050-4aff-92a2-a92982428b95\" (UID: \"231734f6-4050-4aff-92a2-a92982428b95\") " Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.397005 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/231734f6-4050-4aff-92a2-a92982428b95-combined-ca-bundle\") pod \"231734f6-4050-4aff-92a2-a92982428b95\" (UID: \"231734f6-4050-4aff-92a2-a92982428b95\") " Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.397100 4721 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/231734f6-4050-4aff-92a2-a92982428b95-config-data\") pod \"231734f6-4050-4aff-92a2-a92982428b95\" (UID: \"231734f6-4050-4aff-92a2-a92982428b95\") " Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.397470 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/231734f6-4050-4aff-92a2-a92982428b95-logs" (OuterVolumeSpecName: "logs") pod "231734f6-4050-4aff-92a2-a92982428b95" (UID: "231734f6-4050-4aff-92a2-a92982428b95"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.397756 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/231734f6-4050-4aff-92a2-a92982428b95-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "231734f6-4050-4aff-92a2-a92982428b95" (UID: "231734f6-4050-4aff-92a2-a92982428b95"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.397988 4721 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/231734f6-4050-4aff-92a2-a92982428b95-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.398000 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/231734f6-4050-4aff-92a2-a92982428b95-logs\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.407878 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/231734f6-4050-4aff-92a2-a92982428b95-kube-api-access-tw8rf" (OuterVolumeSpecName: "kube-api-access-tw8rf") pod "231734f6-4050-4aff-92a2-a92982428b95" (UID: "231734f6-4050-4aff-92a2-a92982428b95"). InnerVolumeSpecName "kube-api-access-tw8rf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.411814 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/231734f6-4050-4aff-92a2-a92982428b95-scripts" (OuterVolumeSpecName: "scripts") pod "231734f6-4050-4aff-92a2-a92982428b95" (UID: "231734f6-4050-4aff-92a2-a92982428b95"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.428046 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a879c878-fff2-4aa4-b08a-67d13027b95e" path="/var/lib/kubelet/pods/a879c878-fff2-4aa4-b08a-67d13027b95e/volumes" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.428523 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663" (OuterVolumeSpecName: "glance") pod "231734f6-4050-4aff-92a2-a92982428b95" (UID: "231734f6-4050-4aff-92a2-a92982428b95"). InnerVolumeSpecName "pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.467236 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/231734f6-4050-4aff-92a2-a92982428b95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "231734f6-4050-4aff-92a2-a92982428b95" (UID: "231734f6-4050-4aff-92a2-a92982428b95"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.483268 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/231734f6-4050-4aff-92a2-a92982428b95-config-data" (OuterVolumeSpecName: "config-data") pod "231734f6-4050-4aff-92a2-a92982428b95" (UID: "231734f6-4050-4aff-92a2-a92982428b95"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.500081 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw8rf\" (UniqueName: \"kubernetes.io/projected/231734f6-4050-4aff-92a2-a92982428b95-kube-api-access-tw8rf\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.500424 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/231734f6-4050-4aff-92a2-a92982428b95-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.500516 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/231734f6-4050-4aff-92a2-a92982428b95-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.500599 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/231734f6-4050-4aff-92a2-a92982428b95-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.500697 4721 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\") on node \"crc\" " Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.531558 4721 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.532099 4721 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663") on node "crc" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.595793 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2651c902-94b1-4da3-b03f-cd5aee83749b","Type":"ContainerStarted","Data":"6863179c724438e853ded7b834d2693a1bdec25fbe0b4d4e3ec7a6bb5320d8bb"} Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.595976 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2651c902-94b1-4da3-b03f-cd5aee83749b" containerName="glance-log" containerID="cri-o://9b2f14ad59791ba7c4b7509a93d19935150acbbd21de442df3a99c78406b90d5" gracePeriod=30 Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.597138 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2651c902-94b1-4da3-b03f-cd5aee83749b" containerName="glance-httpd" containerID="cri-o://6863179c724438e853ded7b834d2693a1bdec25fbe0b4d4e3ec7a6bb5320d8bb" gracePeriod=30 Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.606158 4721 reconciler_common.go:293] "Volume detached for volume \"pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.616838 4721 generic.go:334] "Generic (PLEG): container finished" podID="231734f6-4050-4aff-92a2-a92982428b95" containerID="b3242cfa6b241cc91e98f1adae423b7d5225bd04f979b2258cca2ccaec320329" exitCode=143 Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.616878 4721 generic.go:334] "Generic (PLEG): container finished" podID="231734f6-4050-4aff-92a2-a92982428b95" containerID="4c64a38f5abe2c61595868db1d01effc7845b6eae80975a67d802574bacd5fa2" exitCode=143 Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.619509 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"231734f6-4050-4aff-92a2-a92982428b95","Type":"ContainerDied","Data":"b3242cfa6b241cc91e98f1adae423b7d5225bd04f979b2258cca2ccaec320329"} Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.619571 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"231734f6-4050-4aff-92a2-a92982428b95","Type":"ContainerDied","Data":"4c64a38f5abe2c61595868db1d01effc7845b6eae80975a67d802574bacd5fa2"} Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.619584 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"231734f6-4050-4aff-92a2-a92982428b95","Type":"ContainerDied","Data":"a421eaea63c1491fb34aa76f2530e7a4cae6ab0162bb28e14e6199cf5d6daf85"} Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.619602 4721 scope.go:117] "RemoveContainer" containerID="b3242cfa6b241cc91e98f1adae423b7d5225bd04f979b2258cca2ccaec320329" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.619857 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.633004 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=27.632978523 podStartE2EDuration="27.632978523s" podCreationTimestamp="2026-02-02 13:23:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:23:56.619478236 +0000 UTC m=+1376.921992625" watchObservedRunningTime="2026-02-02 13:23:56.632978523 +0000 UTC m=+1376.935492912" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.682793 4721 scope.go:117] "RemoveContainer" containerID="4c64a38f5abe2c61595868db1d01effc7845b6eae80975a67d802574bacd5fa2" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.682975 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.705137 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.735250 4721 scope.go:117] "RemoveContainer" containerID="b3242cfa6b241cc91e98f1adae423b7d5225bd04f979b2258cca2ccaec320329" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.735364 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 13:23:56 crc kubenswrapper[4721]: E0202 13:23:56.735899 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="231734f6-4050-4aff-92a2-a92982428b95" containerName="glance-log" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.735911 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="231734f6-4050-4aff-92a2-a92982428b95" containerName="glance-log" Feb 02 13:23:56 crc kubenswrapper[4721]: E0202 13:23:56.735922 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a879c878-fff2-4aa4-b08a-67d13027b95e" containerName="init" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.735927 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="a879c878-fff2-4aa4-b08a-67d13027b95e" containerName="init" Feb 02 13:23:56 crc kubenswrapper[4721]: E0202 13:23:56.735945 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="231734f6-4050-4aff-92a2-a92982428b95" containerName="glance-httpd" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.735950 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="231734f6-4050-4aff-92a2-a92982428b95" containerName="glance-httpd" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.736150 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="a879c878-fff2-4aa4-b08a-67d13027b95e" containerName="init" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.736167 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="231734f6-4050-4aff-92a2-a92982428b95" containerName="glance-httpd" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.736184 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="231734f6-4050-4aff-92a2-a92982428b95" containerName="glance-log" Feb 02 13:23:56 crc kubenswrapper[4721]: E0202 13:23:56.736513 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3242cfa6b241cc91e98f1adae423b7d5225bd04f979b2258cca2ccaec320329\": container with ID starting with 
b3242cfa6b241cc91e98f1adae423b7d5225bd04f979b2258cca2ccaec320329 not found: ID does not exist" containerID="b3242cfa6b241cc91e98f1adae423b7d5225bd04f979b2258cca2ccaec320329" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.737388 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.737430 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3242cfa6b241cc91e98f1adae423b7d5225bd04f979b2258cca2ccaec320329"} err="failed to get container status \"b3242cfa6b241cc91e98f1adae423b7d5225bd04f979b2258cca2ccaec320329\": rpc error: code = NotFound desc = could not find container \"b3242cfa6b241cc91e98f1adae423b7d5225bd04f979b2258cca2ccaec320329\": container with ID starting with b3242cfa6b241cc91e98f1adae423b7d5225bd04f979b2258cca2ccaec320329 not found: ID does not exist" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.738730 4721 scope.go:117] "RemoveContainer" containerID="4c64a38f5abe2c61595868db1d01effc7845b6eae80975a67d802574bacd5fa2" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.739968 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 02 13:23:56 crc kubenswrapper[4721]: E0202 13:23:56.740186 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c64a38f5abe2c61595868db1d01effc7845b6eae80975a67d802574bacd5fa2\": container with ID starting with 4c64a38f5abe2c61595868db1d01effc7845b6eae80975a67d802574bacd5fa2 not found: ID does not exist" containerID="4c64a38f5abe2c61595868db1d01effc7845b6eae80975a67d802574bacd5fa2" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.740213 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c64a38f5abe2c61595868db1d01effc7845b6eae80975a67d802574bacd5fa2"} err="failed to get container status \"4c64a38f5abe2c61595868db1d01effc7845b6eae80975a67d802574bacd5fa2\": rpc error: code = NotFound desc = could not find container \"4c64a38f5abe2c61595868db1d01effc7845b6eae80975a67d802574bacd5fa2\": container with ID starting with 4c64a38f5abe2c61595868db1d01effc7845b6eae80975a67d802574bacd5fa2 not found: ID does not exist" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.740237 4721 scope.go:117] "RemoveContainer" containerID="b3242cfa6b241cc91e98f1adae423b7d5225bd04f979b2258cca2ccaec320329" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.740377 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.740715 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3242cfa6b241cc91e98f1adae423b7d5225bd04f979b2258cca2ccaec320329"} err="failed to get container status \"b3242cfa6b241cc91e98f1adae423b7d5225bd04f979b2258cca2ccaec320329\": rpc error: code = NotFound desc = could not find container \"b3242cfa6b241cc91e98f1adae423b7d5225bd04f979b2258cca2ccaec320329\": container with ID starting with b3242cfa6b241cc91e98f1adae423b7d5225bd04f979b2258cca2ccaec320329 not found: ID does not exist" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.740736 4721 scope.go:117] "RemoveContainer" containerID="4c64a38f5abe2c61595868db1d01effc7845b6eae80975a67d802574bacd5fa2" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 
13:23:56.741055 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c64a38f5abe2c61595868db1d01effc7845b6eae80975a67d802574bacd5fa2"} err="failed to get container status \"4c64a38f5abe2c61595868db1d01effc7845b6eae80975a67d802574bacd5fa2\": rpc error: code = NotFound desc = could not find container \"4c64a38f5abe2c61595868db1d01effc7845b6eae80975a67d802574bacd5fa2\": container with ID starting with 4c64a38f5abe2c61595868db1d01effc7845b6eae80975a67d802574bacd5fa2 not found: ID does not exist" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.760707 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.814448 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\") pod \"glance-default-external-api-0\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.814513 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cdd3f19-3e66-4807-a0e8-957c713cef36-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.814562 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cdd3f19-3e66-4807-a0e8-957c713cef36-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.814609 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cdd3f19-3e66-4807-a0e8-957c713cef36-scripts\") pod \"glance-default-external-api-0\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.814722 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpqt2\" (UniqueName: \"kubernetes.io/projected/7cdd3f19-3e66-4807-a0e8-957c713cef36-kube-api-access-vpqt2\") pod \"glance-default-external-api-0\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.814867 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cdd3f19-3e66-4807-a0e8-957c713cef36-logs\") pod \"glance-default-external-api-0\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.814937 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cdd3f19-3e66-4807-a0e8-957c713cef36-config-data\") pod \"glance-default-external-api-0\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " 
pod="openstack/glance-default-external-api-0" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.814967 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7cdd3f19-3e66-4807-a0e8-957c713cef36-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.918648 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\") pod \"glance-default-external-api-0\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.918726 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cdd3f19-3e66-4807-a0e8-957c713cef36-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.918828 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cdd3f19-3e66-4807-a0e8-957c713cef36-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.918916 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cdd3f19-3e66-4807-a0e8-957c713cef36-scripts\") pod \"glance-default-external-api-0\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.918988 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpqt2\" (UniqueName: \"kubernetes.io/projected/7cdd3f19-3e66-4807-a0e8-957c713cef36-kube-api-access-vpqt2\") pod \"glance-default-external-api-0\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.919089 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cdd3f19-3e66-4807-a0e8-957c713cef36-logs\") pod \"glance-default-external-api-0\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.919142 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cdd3f19-3e66-4807-a0e8-957c713cef36-config-data\") pod \"glance-default-external-api-0\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.919168 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7cdd3f19-3e66-4807-a0e8-957c713cef36-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " 
pod="openstack/glance-default-external-api-0" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.919914 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7cdd3f19-3e66-4807-a0e8-957c713cef36-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.920515 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cdd3f19-3e66-4807-a0e8-957c713cef36-logs\") pod \"glance-default-external-api-0\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.930798 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cdd3f19-3e66-4807-a0e8-957c713cef36-scripts\") pod \"glance-default-external-api-0\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.930872 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.930920 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\") pod \"glance-default-external-api-0\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c7812605d9919b226d4340fce797cd8fb18c9c948d1e68864aa7eb7aeecf4816/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.937808 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cdd3f19-3e66-4807-a0e8-957c713cef36-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.950227 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cdd3f19-3e66-4807-a0e8-957c713cef36-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.950768 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cdd3f19-3e66-4807-a0e8-957c713cef36-config-data\") pod \"glance-default-external-api-0\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:56 crc kubenswrapper[4721]: I0202 13:23:56.959130 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpqt2\" (UniqueName: \"kubernetes.io/projected/7cdd3f19-3e66-4807-a0e8-957c713cef36-kube-api-access-vpqt2\") pod \"glance-default-external-api-0\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.057778 
4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\") pod \"glance-default-external-api-0\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " pod="openstack/glance-default-external-api-0" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.298003 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.320672 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-86z2v" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.429190 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdd67c16-7130-4095-952f-006aa5bcd5bb-combined-ca-bundle\") pod \"bdd67c16-7130-4095-952f-006aa5bcd5bb\" (UID: \"bdd67c16-7130-4095-952f-006aa5bcd5bb\") " Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.429314 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdd67c16-7130-4095-952f-006aa5bcd5bb-config-data\") pod \"bdd67c16-7130-4095-952f-006aa5bcd5bb\" (UID: \"bdd67c16-7130-4095-952f-006aa5bcd5bb\") " Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.429454 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdd67c16-7130-4095-952f-006aa5bcd5bb-scripts\") pod \"bdd67c16-7130-4095-952f-006aa5bcd5bb\" (UID: \"bdd67c16-7130-4095-952f-006aa5bcd5bb\") " Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.429514 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fc8kv\" (UniqueName: \"kubernetes.io/projected/bdd67c16-7130-4095-952f-006aa5bcd5bb-kube-api-access-fc8kv\") pod \"bdd67c16-7130-4095-952f-006aa5bcd5bb\" (UID: \"bdd67c16-7130-4095-952f-006aa5bcd5bb\") " Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.429547 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdd67c16-7130-4095-952f-006aa5bcd5bb-logs\") pod \"bdd67c16-7130-4095-952f-006aa5bcd5bb\" (UID: \"bdd67c16-7130-4095-952f-006aa5bcd5bb\") " Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.431245 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdd67c16-7130-4095-952f-006aa5bcd5bb-logs" (OuterVolumeSpecName: "logs") pod "bdd67c16-7130-4095-952f-006aa5bcd5bb" (UID: "bdd67c16-7130-4095-952f-006aa5bcd5bb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.444233 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdd67c16-7130-4095-952f-006aa5bcd5bb-scripts" (OuterVolumeSpecName: "scripts") pod "bdd67c16-7130-4095-952f-006aa5bcd5bb" (UID: "bdd67c16-7130-4095-952f-006aa5bcd5bb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.445760 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdd67c16-7130-4095-952f-006aa5bcd5bb-kube-api-access-fc8kv" (OuterVolumeSpecName: "kube-api-access-fc8kv") pod "bdd67c16-7130-4095-952f-006aa5bcd5bb" (UID: "bdd67c16-7130-4095-952f-006aa5bcd5bb"). InnerVolumeSpecName "kube-api-access-fc8kv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.486503 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdd67c16-7130-4095-952f-006aa5bcd5bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bdd67c16-7130-4095-952f-006aa5bcd5bb" (UID: "bdd67c16-7130-4095-952f-006aa5bcd5bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.501297 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdd67c16-7130-4095-952f-006aa5bcd5bb-config-data" (OuterVolumeSpecName: "config-data") pod "bdd67c16-7130-4095-952f-006aa5bcd5bb" (UID: "bdd67c16-7130-4095-952f-006aa5bcd5bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.536145 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fc8kv\" (UniqueName: \"kubernetes.io/projected/bdd67c16-7130-4095-952f-006aa5bcd5bb-kube-api-access-fc8kv\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.536185 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdd67c16-7130-4095-952f-006aa5bcd5bb-logs\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.536198 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdd67c16-7130-4095-952f-006aa5bcd5bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.536209 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdd67c16-7130-4095-952f-006aa5bcd5bb-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.536219 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdd67c16-7130-4095-952f-006aa5bcd5bb-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.664050 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-86z2v" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.664108 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-86z2v" event={"ID":"bdd67c16-7130-4095-952f-006aa5bcd5bb","Type":"ContainerDied","Data":"42fcaa03785c97d077f0697f3384b2a0825c5d2c67e774961ea3ccb33c705ea9"} Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.676277 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42fcaa03785c97d077f0697f3384b2a0825c5d2c67e774961ea3ccb33c705ea9" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.691355 4721 generic.go:334] "Generic (PLEG): container finished" podID="2651c902-94b1-4da3-b03f-cd5aee83749b" containerID="6863179c724438e853ded7b834d2693a1bdec25fbe0b4d4e3ec7a6bb5320d8bb" exitCode=0 Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.691396 4721 generic.go:334] "Generic (PLEG): container finished" podID="2651c902-94b1-4da3-b03f-cd5aee83749b" containerID="9b2f14ad59791ba7c4b7509a93d19935150acbbd21de442df3a99c78406b90d5" exitCode=143 Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.691422 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2651c902-94b1-4da3-b03f-cd5aee83749b","Type":"ContainerDied","Data":"6863179c724438e853ded7b834d2693a1bdec25fbe0b4d4e3ec7a6bb5320d8bb"} Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.691456 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2651c902-94b1-4da3-b03f-cd5aee83749b","Type":"ContainerDied","Data":"9b2f14ad59791ba7c4b7509a93d19935150acbbd21de442df3a99c78406b90d5"} Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.732244 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-75b75c495b-kpsxz"] Feb 02 13:23:57 crc kubenswrapper[4721]: E0202 13:23:57.732894 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdd67c16-7130-4095-952f-006aa5bcd5bb" containerName="placement-db-sync" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.732921 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdd67c16-7130-4095-952f-006aa5bcd5bb" containerName="placement-db-sync" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.733227 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdd67c16-7130-4095-952f-006aa5bcd5bb" containerName="placement-db-sync" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.739279 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-75b75c495b-kpsxz" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.753824 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.754061 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.754231 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-tfr2p" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.754242 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.754366 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.788989 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-75b75c495b-kpsxz"] Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.848328 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/873ec78b-5777-4560-a744-c4789b43d966-logs\") pod \"placement-75b75c495b-kpsxz\" (UID: \"873ec78b-5777-4560-a744-c4789b43d966\") " pod="openstack/placement-75b75c495b-kpsxz" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.848372 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-config-data\") pod \"placement-75b75c495b-kpsxz\" (UID: \"873ec78b-5777-4560-a744-c4789b43d966\") " pod="openstack/placement-75b75c495b-kpsxz" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.848415 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngbhh\" (UniqueName: \"kubernetes.io/projected/873ec78b-5777-4560-a744-c4789b43d966-kube-api-access-ngbhh\") pod \"placement-75b75c495b-kpsxz\" (UID: \"873ec78b-5777-4560-a744-c4789b43d966\") " pod="openstack/placement-75b75c495b-kpsxz" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.848607 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-scripts\") pod \"placement-75b75c495b-kpsxz\" (UID: \"873ec78b-5777-4560-a744-c4789b43d966\") " pod="openstack/placement-75b75c495b-kpsxz" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.848837 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-combined-ca-bundle\") pod \"placement-75b75c495b-kpsxz\" (UID: \"873ec78b-5777-4560-a744-c4789b43d966\") " pod="openstack/placement-75b75c495b-kpsxz" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.848886 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-public-tls-certs\") pod \"placement-75b75c495b-kpsxz\" (UID: \"873ec78b-5777-4560-a744-c4789b43d966\") " pod="openstack/placement-75b75c495b-kpsxz" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.848919 4721 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-internal-tls-certs\") pod \"placement-75b75c495b-kpsxz\" (UID: \"873ec78b-5777-4560-a744-c4789b43d966\") " pod="openstack/placement-75b75c495b-kpsxz" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.950692 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/873ec78b-5777-4560-a744-c4789b43d966-logs\") pod \"placement-75b75c495b-kpsxz\" (UID: \"873ec78b-5777-4560-a744-c4789b43d966\") " pod="openstack/placement-75b75c495b-kpsxz" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.951009 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-config-data\") pod \"placement-75b75c495b-kpsxz\" (UID: \"873ec78b-5777-4560-a744-c4789b43d966\") " pod="openstack/placement-75b75c495b-kpsxz" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.951048 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngbhh\" (UniqueName: \"kubernetes.io/projected/873ec78b-5777-4560-a744-c4789b43d966-kube-api-access-ngbhh\") pod \"placement-75b75c495b-kpsxz\" (UID: \"873ec78b-5777-4560-a744-c4789b43d966\") " pod="openstack/placement-75b75c495b-kpsxz" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.951150 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-scripts\") pod \"placement-75b75c495b-kpsxz\" (UID: \"873ec78b-5777-4560-a744-c4789b43d966\") " pod="openstack/placement-75b75c495b-kpsxz" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.951286 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-combined-ca-bundle\") pod \"placement-75b75c495b-kpsxz\" (UID: \"873ec78b-5777-4560-a744-c4789b43d966\") " pod="openstack/placement-75b75c495b-kpsxz" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.951325 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-public-tls-certs\") pod \"placement-75b75c495b-kpsxz\" (UID: \"873ec78b-5777-4560-a744-c4789b43d966\") " pod="openstack/placement-75b75c495b-kpsxz" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.951357 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-internal-tls-certs\") pod \"placement-75b75c495b-kpsxz\" (UID: \"873ec78b-5777-4560-a744-c4789b43d966\") " pod="openstack/placement-75b75c495b-kpsxz" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.951584 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/873ec78b-5777-4560-a744-c4789b43d966-logs\") pod \"placement-75b75c495b-kpsxz\" (UID: \"873ec78b-5777-4560-a744-c4789b43d966\") " pod="openstack/placement-75b75c495b-kpsxz" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.957495 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-internal-tls-certs\") pod \"placement-75b75c495b-kpsxz\" (UID: \"873ec78b-5777-4560-a744-c4789b43d966\") " pod="openstack/placement-75b75c495b-kpsxz" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.958155 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-config-data\") pod \"placement-75b75c495b-kpsxz\" (UID: \"873ec78b-5777-4560-a744-c4789b43d966\") " pod="openstack/placement-75b75c495b-kpsxz" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.959384 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-scripts\") pod \"placement-75b75c495b-kpsxz\" (UID: \"873ec78b-5777-4560-a744-c4789b43d966\") " pod="openstack/placement-75b75c495b-kpsxz" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.960041 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-combined-ca-bundle\") pod \"placement-75b75c495b-kpsxz\" (UID: \"873ec78b-5777-4560-a744-c4789b43d966\") " pod="openstack/placement-75b75c495b-kpsxz" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.963218 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-public-tls-certs\") pod \"placement-75b75c495b-kpsxz\" (UID: \"873ec78b-5777-4560-a744-c4789b43d966\") " pod="openstack/placement-75b75c495b-kpsxz" Feb 02 13:23:57 crc kubenswrapper[4721]: I0202 13:23:57.970880 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngbhh\" (UniqueName: \"kubernetes.io/projected/873ec78b-5777-4560-a744-c4789b43d966-kube-api-access-ngbhh\") pod \"placement-75b75c495b-kpsxz\" (UID: \"873ec78b-5777-4560-a744-c4789b43d966\") " pod="openstack/placement-75b75c495b-kpsxz" Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.054545 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.148719 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-75b75c495b-kpsxz" Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.158963 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2651c902-94b1-4da3-b03f-cd5aee83749b-logs\") pod \"2651c902-94b1-4da3-b03f-cd5aee83749b\" (UID: \"2651c902-94b1-4da3-b03f-cd5aee83749b\") " Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.159045 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgt6x\" (UniqueName: \"kubernetes.io/projected/2651c902-94b1-4da3-b03f-cd5aee83749b-kube-api-access-jgt6x\") pod \"2651c902-94b1-4da3-b03f-cd5aee83749b\" (UID: \"2651c902-94b1-4da3-b03f-cd5aee83749b\") " Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.159178 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2651c902-94b1-4da3-b03f-cd5aee83749b-scripts\") pod \"2651c902-94b1-4da3-b03f-cd5aee83749b\" (UID: \"2651c902-94b1-4da3-b03f-cd5aee83749b\") " Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.159221 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2651c902-94b1-4da3-b03f-cd5aee83749b-config-data\") pod \"2651c902-94b1-4da3-b03f-cd5aee83749b\" (UID: \"2651c902-94b1-4da3-b03f-cd5aee83749b\") " Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.159336 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\") pod \"2651c902-94b1-4da3-b03f-cd5aee83749b\" (UID: \"2651c902-94b1-4da3-b03f-cd5aee83749b\") " Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.159404 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2651c902-94b1-4da3-b03f-cd5aee83749b-combined-ca-bundle\") pod \"2651c902-94b1-4da3-b03f-cd5aee83749b\" (UID: \"2651c902-94b1-4da3-b03f-cd5aee83749b\") " Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.159566 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2651c902-94b1-4da3-b03f-cd5aee83749b-httpd-run\") pod \"2651c902-94b1-4da3-b03f-cd5aee83749b\" (UID: \"2651c902-94b1-4da3-b03f-cd5aee83749b\") " Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.181672 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2651c902-94b1-4da3-b03f-cd5aee83749b-logs" (OuterVolumeSpecName: "logs") pod "2651c902-94b1-4da3-b03f-cd5aee83749b" (UID: "2651c902-94b1-4da3-b03f-cd5aee83749b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.191480 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2651c902-94b1-4da3-b03f-cd5aee83749b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2651c902-94b1-4da3-b03f-cd5aee83749b" (UID: "2651c902-94b1-4da3-b03f-cd5aee83749b"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.202813 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2651c902-94b1-4da3-b03f-cd5aee83749b-kube-api-access-jgt6x" (OuterVolumeSpecName: "kube-api-access-jgt6x") pod "2651c902-94b1-4da3-b03f-cd5aee83749b" (UID: "2651c902-94b1-4da3-b03f-cd5aee83749b"). InnerVolumeSpecName "kube-api-access-jgt6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.202908 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2651c902-94b1-4da3-b03f-cd5aee83749b-scripts" (OuterVolumeSpecName: "scripts") pod "2651c902-94b1-4da3-b03f-cd5aee83749b" (UID: "2651c902-94b1-4da3-b03f-cd5aee83749b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.222271 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2651c902-94b1-4da3-b03f-cd5aee83749b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2651c902-94b1-4da3-b03f-cd5aee83749b" (UID: "2651c902-94b1-4da3-b03f-cd5aee83749b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.249290 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56daf3c3-162c-4970-aab6-c4cecea22e79" (OuterVolumeSpecName: "glance") pod "2651c902-94b1-4da3-b03f-cd5aee83749b" (UID: "2651c902-94b1-4da3-b03f-cd5aee83749b"). InnerVolumeSpecName "pvc-56daf3c3-162c-4970-aab6-c4cecea22e79". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.263197 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgt6x\" (UniqueName: \"kubernetes.io/projected/2651c902-94b1-4da3-b03f-cd5aee83749b-kube-api-access-jgt6x\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.263227 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2651c902-94b1-4da3-b03f-cd5aee83749b-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.263255 4721 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\") on node \"crc\" " Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.263265 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2651c902-94b1-4da3-b03f-cd5aee83749b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.263275 4721 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2651c902-94b1-4da3-b03f-cd5aee83749b-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.263286 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2651c902-94b1-4da3-b03f-cd5aee83749b-logs\") on node \"crc\" DevicePath \"\"" Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.280218 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 13:23:58 crc kubenswrapper[4721]: W0202 13:23:58.286978 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cdd3f19_3e66_4807_a0e8_957c713cef36.slice/crio-57b27fa9d3d382916d0fc10e2c5127febea2102b5b13a8f86a12bdcd66a461e9 WatchSource:0}: Error finding container 57b27fa9d3d382916d0fc10e2c5127febea2102b5b13a8f86a12bdcd66a461e9: Status 404 returned error can't find the container with id 57b27fa9d3d382916d0fc10e2c5127febea2102b5b13a8f86a12bdcd66a461e9 Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.301234 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2651c902-94b1-4da3-b03f-cd5aee83749b-config-data" (OuterVolumeSpecName: "config-data") pod "2651c902-94b1-4da3-b03f-cd5aee83749b" (UID: "2651c902-94b1-4da3-b03f-cd5aee83749b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.303271 4721 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.303521 4721 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-56daf3c3-162c-4970-aab6-c4cecea22e79" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56daf3c3-162c-4970-aab6-c4cecea22e79") on node "crc"
Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.366874 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2651c902-94b1-4da3-b03f-cd5aee83749b-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.367288 4721 reconciler_common.go:293] "Volume detached for volume \"pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\") on node \"crc\" DevicePath \"\""
Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.453392 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="231734f6-4050-4aff-92a2-a92982428b95" path="/var/lib/kubelet/pods/231734f6-4050-4aff-92a2-a92982428b95/volumes"
Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.706719 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7cdd3f19-3e66-4807-a0e8-957c713cef36","Type":"ContainerStarted","Data":"57b27fa9d3d382916d0fc10e2c5127febea2102b5b13a8f86a12bdcd66a461e9"}
Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.713007 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2651c902-94b1-4da3-b03f-cd5aee83749b","Type":"ContainerDied","Data":"de0fa32e0591d0c591d4364744d36cc03bd323f3bbad2d10abfa57cf3278568e"}
Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.713091 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.713113 4721 scope.go:117] "RemoveContainer" containerID="6863179c724438e853ded7b834d2693a1bdec25fbe0b4d4e3ec7a6bb5320d8bb"
Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.748036 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-75b75c495b-kpsxz"]
Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.943453 4721 scope.go:117] "RemoveContainer" containerID="9b2f14ad59791ba7c4b7509a93d19935150acbbd21de442df3a99c78406b90d5"
Feb 02 13:23:58 crc kubenswrapper[4721]: I0202 13:23:58.989437 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.027159 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.038143 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 13:23:59 crc kubenswrapper[4721]: E0202 13:23:59.038814 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2651c902-94b1-4da3-b03f-cd5aee83749b" containerName="glance-log"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.038834 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="2651c902-94b1-4da3-b03f-cd5aee83749b" containerName="glance-log"
Feb 02 13:23:59 crc kubenswrapper[4721]: E0202 13:23:59.038899 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2651c902-94b1-4da3-b03f-cd5aee83749b" containerName="glance-httpd"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.038910 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="2651c902-94b1-4da3-b03f-cd5aee83749b" containerName="glance-httpd"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.039198 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="2651c902-94b1-4da3-b03f-cd5aee83749b" containerName="glance-httpd"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.039222 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="2651c902-94b1-4da3-b03f-cd5aee83749b" containerName="glance-log"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.040783 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.043886 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.044707 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.056825 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.092958 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6x5k\" (UniqueName: \"kubernetes.io/projected/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-kube-api-access-r6x5k\") pod \"glance-default-internal-api-0\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.093084 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.093183 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.093510 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\") pod \"glance-default-internal-api-0\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.093563 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.093644 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.093688 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-logs\") pod \"glance-default-internal-api-0\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.093929 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.195905 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\") pod \"glance-default-internal-api-0\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.195956 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.195996 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.196021 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-logs\") pod \"glance-default-internal-api-0\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.196119 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.196168 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6x5k\" (UniqueName: \"kubernetes.io/projected/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-kube-api-access-r6x5k\") pod \"glance-default-internal-api-0\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.196200 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.196253 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.197464 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.198216 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-logs\") pod \"glance-default-internal-api-0\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.202610 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.202690 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\") pod \"glance-default-internal-api-0\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9f813ebfbde533117d7c5539927c335132efd30cff3e1cb355d78cb9d4c1a927/globalmount\"" pod="openstack/glance-default-internal-api-0"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.204248 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.205524 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.206419 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.221813 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.233957 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6x5k\" (UniqueName: \"kubernetes.io/projected/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-kube-api-access-r6x5k\") pod \"glance-default-internal-api-0\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.255605 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\") pod \"glance-default-internal-api-0\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " pod="openstack/glance-default-internal-api-0"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.389912 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.741448 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7cdd3f19-3e66-4807-a0e8-957c713cef36","Type":"ContainerStarted","Data":"60d5f5c74197287dc48184ed76b8f6d34b4d23a85669a6b5c075668ede91ce31"}
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.749841 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-75b75c495b-kpsxz" event={"ID":"873ec78b-5777-4560-a744-c4789b43d966","Type":"ContainerStarted","Data":"be87e0c26b43f57fe8bd6e999c716fddff3bcf7e4eec9b29a39b86ceafba0594"}
Feb 02 13:23:59 crc kubenswrapper[4721]: I0202 13:23:59.749894 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-75b75c495b-kpsxz" event={"ID":"873ec78b-5777-4560-a744-c4789b43d966","Type":"ContainerStarted","Data":"bef5ef7eca11516e2b9cce9579ac419ba493f62c89b699687290738232336cce"}
Feb 02 13:24:00 crc kubenswrapper[4721]: I0202 13:24:00.423524 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2651c902-94b1-4da3-b03f-cd5aee83749b" path="/var/lib/kubelet/pods/2651c902-94b1-4da3-b03f-cd5aee83749b/volumes"
Feb 02 13:24:00 crc kubenswrapper[4721]: I0202 13:24:00.760919 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-mnq9d"
Feb 02 13:24:00 crc kubenswrapper[4721]: I0202 13:24:00.830808 4721 generic.go:334] "Generic (PLEG): container finished" podID="d168e414-ab7e-45ad-b142-25dcc1c359b0" containerID="3705d645077158cd12edf8f0f9b5a39f0ba95d5854f57c056a964be7f2bc24c9" exitCode=0
Feb 02 13:24:00 crc kubenswrapper[4721]: I0202 13:24:00.830908 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dw7nl" event={"ID":"d168e414-ab7e-45ad-b142-25dcc1c359b0","Type":"ContainerDied","Data":"3705d645077158cd12edf8f0f9b5a39f0ba95d5854f57c056a964be7f2bc24c9"}
Feb 02 13:24:00 crc kubenswrapper[4721]: I0202 13:24:00.905487 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-x4f5m"]
Feb 02 13:24:00 crc kubenswrapper[4721]: I0202 13:24:00.905748 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-x4f5m" podUID="a12cebe8-c719-4841-8d01-e9faf9b745cf" containerName="dnsmasq-dns" containerID="cri-o://76fa42f68bb28a61e9cdddef88da78612bb32973f43a836c40f835e5ed6d0856" gracePeriod=10
Feb 02 13:24:01 crc kubenswrapper[4721]: I0202 13:24:01.843772 4721 generic.go:334] "Generic (PLEG): container finished" podID="a12cebe8-c719-4841-8d01-e9faf9b745cf" containerID="76fa42f68bb28a61e9cdddef88da78612bb32973f43a836c40f835e5ed6d0856" exitCode=0
Feb 02 13:24:01 crc kubenswrapper[4721]: I0202 13:24:01.844324 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-x4f5m" event={"ID":"a12cebe8-c719-4841-8d01-e9faf9b745cf","Type":"ContainerDied","Data":"76fa42f68bb28a61e9cdddef88da78612bb32973f43a836c40f835e5ed6d0856"}
Feb 02 13:24:01 crc kubenswrapper[4721]: I0202 13:24:01.894824 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-x4f5m" podUID="a12cebe8-c719-4841-8d01-e9faf9b745cf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.150:5353: connect: connection refused"
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.444706 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dw7nl"
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.485025 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-x4f5m"
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.612109 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzdbz\" (UniqueName: \"kubernetes.io/projected/d168e414-ab7e-45ad-b142-25dcc1c359b0-kube-api-access-bzdbz\") pod \"d168e414-ab7e-45ad-b142-25dcc1c359b0\" (UID: \"d168e414-ab7e-45ad-b142-25dcc1c359b0\") "
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.612240 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a12cebe8-c719-4841-8d01-e9faf9b745cf-ovsdbserver-sb\") pod \"a12cebe8-c719-4841-8d01-e9faf9b745cf\" (UID: \"a12cebe8-c719-4841-8d01-e9faf9b745cf\") "
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.612319 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a12cebe8-c719-4841-8d01-e9faf9b745cf-dns-svc\") pod \"a12cebe8-c719-4841-8d01-e9faf9b745cf\" (UID: \"a12cebe8-c719-4841-8d01-e9faf9b745cf\") "
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.612364 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a12cebe8-c719-4841-8d01-e9faf9b745cf-config\") pod \"a12cebe8-c719-4841-8d01-e9faf9b745cf\" (UID: \"a12cebe8-c719-4841-8d01-e9faf9b745cf\") "
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.612383 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2hmg\" (UniqueName: \"kubernetes.io/projected/a12cebe8-c719-4841-8d01-e9faf9b745cf-kube-api-access-h2hmg\") pod \"a12cebe8-c719-4841-8d01-e9faf9b745cf\" (UID: \"a12cebe8-c719-4841-8d01-e9faf9b745cf\") "
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.612441 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-credential-keys\") pod \"d168e414-ab7e-45ad-b142-25dcc1c359b0\" (UID: \"d168e414-ab7e-45ad-b142-25dcc1c359b0\") "
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.612464 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-fernet-keys\") pod \"d168e414-ab7e-45ad-b142-25dcc1c359b0\" (UID: \"d168e414-ab7e-45ad-b142-25dcc1c359b0\") "
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.612511 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-config-data\") pod \"d168e414-ab7e-45ad-b142-25dcc1c359b0\" (UID: \"d168e414-ab7e-45ad-b142-25dcc1c359b0\") "
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.612558 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-scripts\") pod \"d168e414-ab7e-45ad-b142-25dcc1c359b0\" (UID: \"d168e414-ab7e-45ad-b142-25dcc1c359b0\") "
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.612640 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-combined-ca-bundle\") pod \"d168e414-ab7e-45ad-b142-25dcc1c359b0\" (UID: \"d168e414-ab7e-45ad-b142-25dcc1c359b0\") "
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.612723 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a12cebe8-c719-4841-8d01-e9faf9b745cf-ovsdbserver-nb\") pod \"a12cebe8-c719-4841-8d01-e9faf9b745cf\" (UID: \"a12cebe8-c719-4841-8d01-e9faf9b745cf\") "
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.638557 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d168e414-ab7e-45ad-b142-25dcc1c359b0-kube-api-access-bzdbz" (OuterVolumeSpecName: "kube-api-access-bzdbz") pod "d168e414-ab7e-45ad-b142-25dcc1c359b0" (UID: "d168e414-ab7e-45ad-b142-25dcc1c359b0"). InnerVolumeSpecName "kube-api-access-bzdbz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.651007 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a12cebe8-c719-4841-8d01-e9faf9b745cf-kube-api-access-h2hmg" (OuterVolumeSpecName: "kube-api-access-h2hmg") pod "a12cebe8-c719-4841-8d01-e9faf9b745cf" (UID: "a12cebe8-c719-4841-8d01-e9faf9b745cf"). InnerVolumeSpecName "kube-api-access-h2hmg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.674052 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-scripts" (OuterVolumeSpecName: "scripts") pod "d168e414-ab7e-45ad-b142-25dcc1c359b0" (UID: "d168e414-ab7e-45ad-b142-25dcc1c359b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.675402 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mlbxn"]
Feb 02 13:24:03 crc kubenswrapper[4721]: E0202 13:24:03.677030 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a12cebe8-c719-4841-8d01-e9faf9b745cf" containerName="dnsmasq-dns"
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.677052 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="a12cebe8-c719-4841-8d01-e9faf9b745cf" containerName="dnsmasq-dns"
Feb 02 13:24:03 crc kubenswrapper[4721]: E0202 13:24:03.677081 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d168e414-ab7e-45ad-b142-25dcc1c359b0" containerName="keystone-bootstrap"
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.677089 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d168e414-ab7e-45ad-b142-25dcc1c359b0" containerName="keystone-bootstrap"
Feb 02 13:24:03 crc kubenswrapper[4721]: E0202 13:24:03.677138 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a12cebe8-c719-4841-8d01-e9faf9b745cf" containerName="init"
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.677152 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="a12cebe8-c719-4841-8d01-e9faf9b745cf" containerName="init"
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.677408 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="d168e414-ab7e-45ad-b142-25dcc1c359b0" containerName="keystone-bootstrap"
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.677429 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="a12cebe8-c719-4841-8d01-e9faf9b745cf" containerName="dnsmasq-dns"
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.678376 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d168e414-ab7e-45ad-b142-25dcc1c359b0" (UID: "d168e414-ab7e-45ad-b142-25dcc1c359b0"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.681651 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mlbxn"
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.694274 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mlbxn"]
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.717288 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d168e414-ab7e-45ad-b142-25dcc1c359b0" (UID: "d168e414-ab7e-45ad-b142-25dcc1c359b0"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.722900 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzdbz\" (UniqueName: \"kubernetes.io/projected/d168e414-ab7e-45ad-b142-25dcc1c359b0-kube-api-access-bzdbz\") on node \"crc\" DevicePath \"\""
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.723054 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2hmg\" (UniqueName: \"kubernetes.io/projected/a12cebe8-c719-4841-8d01-e9faf9b745cf-kube-api-access-h2hmg\") on node \"crc\" DevicePath \"\""
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.723252 4721 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-credential-keys\") on node \"crc\" DevicePath \"\""
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.723347 4721 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.723407 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.826508 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmwvt\" (UniqueName: \"kubernetes.io/projected/37372b76-ef54-4a44-9b56-dea754373219-kube-api-access-dmwvt\") pod \"redhat-operators-mlbxn\" (UID: \"37372b76-ef54-4a44-9b56-dea754373219\") " pod="openshift-marketplace/redhat-operators-mlbxn"
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.827286 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37372b76-ef54-4a44-9b56-dea754373219-utilities\") pod \"redhat-operators-mlbxn\" (UID: \"37372b76-ef54-4a44-9b56-dea754373219\") " pod="openshift-marketplace/redhat-operators-mlbxn"
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.827500 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37372b76-ef54-4a44-9b56-dea754373219-catalog-content\") pod \"redhat-operators-mlbxn\" (UID: \"37372b76-ef54-4a44-9b56-dea754373219\") " pod="openshift-marketplace/redhat-operators-mlbxn"
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.891768 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-x4f5m" event={"ID":"a12cebe8-c719-4841-8d01-e9faf9b745cf","Type":"ContainerDied","Data":"521bca06923fe78f1ba71782b798785e3b87c44a41b5a65d319554abd4047afe"}
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.891828 4721 scope.go:117] "RemoveContainer" containerID="76fa42f68bb28a61e9cdddef88da78612bb32973f43a836c40f835e5ed6d0856"
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.891975 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-x4f5m"
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.908221 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dw7nl"
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.909017 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dw7nl" event={"ID":"d168e414-ab7e-45ad-b142-25dcc1c359b0","Type":"ContainerDied","Data":"b5e597a9c6809a0c74e15c50feaa328c23d1128cb144bc36612339923f03dd73"}
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.909049 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5e597a9c6809a0c74e15c50feaa328c23d1128cb144bc36612339923f03dd73"
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.926166 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5","Type":"ContainerStarted","Data":"4ae676a120a0891127e9d7af3640c4f6801bac80535f6452a01c2c0a7c50780d"}
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.931457 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmwvt\" (UniqueName: \"kubernetes.io/projected/37372b76-ef54-4a44-9b56-dea754373219-kube-api-access-dmwvt\") pod \"redhat-operators-mlbxn\" (UID: \"37372b76-ef54-4a44-9b56-dea754373219\") " pod="openshift-marketplace/redhat-operators-mlbxn"
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.931630 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37372b76-ef54-4a44-9b56-dea754373219-utilities\") pod \"redhat-operators-mlbxn\" (UID: \"37372b76-ef54-4a44-9b56-dea754373219\") " pod="openshift-marketplace/redhat-operators-mlbxn"
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.931681 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37372b76-ef54-4a44-9b56-dea754373219-catalog-content\") pod \"redhat-operators-mlbxn\" (UID: \"37372b76-ef54-4a44-9b56-dea754373219\") " pod="openshift-marketplace/redhat-operators-mlbxn"
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.934053 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37372b76-ef54-4a44-9b56-dea754373219-utilities\") pod \"redhat-operators-mlbxn\" (UID: \"37372b76-ef54-4a44-9b56-dea754373219\") " pod="openshift-marketplace/redhat-operators-mlbxn"
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.936471 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37372b76-ef54-4a44-9b56-dea754373219-catalog-content\") pod \"redhat-operators-mlbxn\" (UID: \"37372b76-ef54-4a44-9b56-dea754373219\") " pod="openshift-marketplace/redhat-operators-mlbxn"
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.954537 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-config-data" (OuterVolumeSpecName: "config-data") pod "d168e414-ab7e-45ad-b142-25dcc1c359b0" (UID: "d168e414-ab7e-45ad-b142-25dcc1c359b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.967952 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmwvt\" (UniqueName: \"kubernetes.io/projected/37372b76-ef54-4a44-9b56-dea754373219-kube-api-access-dmwvt\") pod \"redhat-operators-mlbxn\" (UID: \"37372b76-ef54-4a44-9b56-dea754373219\") " pod="openshift-marketplace/redhat-operators-mlbxn"
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.972349 4721 scope.go:117] "RemoveContainer" containerID="b3aeeebb46496223c552a8ed7c33309ec906c0d30db8ba232bc642182832692e"
Feb 02 13:24:03 crc kubenswrapper[4721]: I0202 13:24:03.984545 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a12cebe8-c719-4841-8d01-e9faf9b745cf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a12cebe8-c719-4841-8d01-e9faf9b745cf" (UID: "a12cebe8-c719-4841-8d01-e9faf9b745cf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.022809 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mlbxn"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.024531 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a12cebe8-c719-4841-8d01-e9faf9b745cf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a12cebe8-c719-4841-8d01-e9faf9b745cf" (UID: "a12cebe8-c719-4841-8d01-e9faf9b745cf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.043426 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a12cebe8-c719-4841-8d01-e9faf9b745cf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.044477 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a12cebe8-c719-4841-8d01-e9faf9b745cf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.044509 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.081312 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.148971 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a12cebe8-c719-4841-8d01-e9faf9b745cf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a12cebe8-c719-4841-8d01-e9faf9b745cf" (UID: "a12cebe8-c719-4841-8d01-e9faf9b745cf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.167190 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d168e414-ab7e-45ad-b142-25dcc1c359b0" (UID: "d168e414-ab7e-45ad-b142-25dcc1c359b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.182736 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a12cebe8-c719-4841-8d01-e9faf9b745cf-config" (OuterVolumeSpecName: "config") pod "a12cebe8-c719-4841-8d01-e9faf9b745cf" (UID: "a12cebe8-c719-4841-8d01-e9faf9b745cf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.251362 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d168e414-ab7e-45ad-b142-25dcc1c359b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.251416 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a12cebe8-c719-4841-8d01-e9faf9b745cf-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.251425 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a12cebe8-c719-4841-8d01-e9faf9b745cf-config\") on node \"crc\" DevicePath \"\""
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.584950 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-x4f5m"]
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.610235 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-x4f5m"]
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.627446 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-784866f846-pjz9x"]
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.629068 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-784866f846-pjz9x"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.635112 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.635920 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.636074 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.636260 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7z9s7"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.636333 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.650752 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.651383 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-784866f846-pjz9x"]
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.756018 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mlbxn"]
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.808305 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pm8z\" (UniqueName: \"kubernetes.io/projected/5883cb27-6bc8-4309-aeac-64a54a46eb89-kube-api-access-5pm8z\") pod \"keystone-784866f846-pjz9x\" (UID: \"5883cb27-6bc8-4309-aeac-64a54a46eb89\") " pod="openstack/keystone-784866f846-pjz9x"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.808563 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5883cb27-6bc8-4309-aeac-64a54a46eb89-scripts\") pod \"keystone-784866f846-pjz9x\" (UID: \"5883cb27-6bc8-4309-aeac-64a54a46eb89\") " pod="openstack/keystone-784866f846-pjz9x"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.808621 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5883cb27-6bc8-4309-aeac-64a54a46eb89-internal-tls-certs\") pod \"keystone-784866f846-pjz9x\" (UID: \"5883cb27-6bc8-4309-aeac-64a54a46eb89\") " pod="openstack/keystone-784866f846-pjz9x"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.808643 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5883cb27-6bc8-4309-aeac-64a54a46eb89-public-tls-certs\") pod \"keystone-784866f846-pjz9x\" (UID: \"5883cb27-6bc8-4309-aeac-64a54a46eb89\") " pod="openstack/keystone-784866f846-pjz9x"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.808686 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5883cb27-6bc8-4309-aeac-64a54a46eb89-combined-ca-bundle\") pod \"keystone-784866f846-pjz9x\" (UID: \"5883cb27-6bc8-4309-aeac-64a54a46eb89\") " pod="openstack/keystone-784866f846-pjz9x"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.808752 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5883cb27-6bc8-4309-aeac-64a54a46eb89-fernet-keys\") pod \"keystone-784866f846-pjz9x\" (UID: \"5883cb27-6bc8-4309-aeac-64a54a46eb89\") " pod="openstack/keystone-784866f846-pjz9x"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.808780 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5883cb27-6bc8-4309-aeac-64a54a46eb89-config-data\") pod \"keystone-784866f846-pjz9x\" (UID: \"5883cb27-6bc8-4309-aeac-64a54a46eb89\") " pod="openstack/keystone-784866f846-pjz9x"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.808831 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5883cb27-6bc8-4309-aeac-64a54a46eb89-credential-keys\") pod \"keystone-784866f846-pjz9x\" (UID: \"5883cb27-6bc8-4309-aeac-64a54a46eb89\") " pod="openstack/keystone-784866f846-pjz9x"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.910826 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5883cb27-6bc8-4309-aeac-64a54a46eb89-internal-tls-certs\") pod \"keystone-784866f846-pjz9x\" (UID: \"5883cb27-6bc8-4309-aeac-64a54a46eb89\") " pod="openstack/keystone-784866f846-pjz9x"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.910891 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5883cb27-6bc8-4309-aeac-64a54a46eb89-public-tls-certs\") pod \"keystone-784866f846-pjz9x\" (UID: \"5883cb27-6bc8-4309-aeac-64a54a46eb89\") " pod="openstack/keystone-784866f846-pjz9x"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.910956 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5883cb27-6bc8-4309-aeac-64a54a46eb89-combined-ca-bundle\") pod \"keystone-784866f846-pjz9x\" (UID: \"5883cb27-6bc8-4309-aeac-64a54a46eb89\") " pod="openstack/keystone-784866f846-pjz9x"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.911046 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5883cb27-6bc8-4309-aeac-64a54a46eb89-fernet-keys\") pod \"keystone-784866f846-pjz9x\" (UID: \"5883cb27-6bc8-4309-aeac-64a54a46eb89\") " pod="openstack/keystone-784866f846-pjz9x"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.911113 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5883cb27-6bc8-4309-aeac-64a54a46eb89-config-data\") pod \"keystone-784866f846-pjz9x\" (UID: \"5883cb27-6bc8-4309-aeac-64a54a46eb89\") " pod="openstack/keystone-784866f846-pjz9x"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.911186 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5883cb27-6bc8-4309-aeac-64a54a46eb89-credential-keys\") pod \"keystone-784866f846-pjz9x\" (UID: \"5883cb27-6bc8-4309-aeac-64a54a46eb89\") " pod="openstack/keystone-784866f846-pjz9x"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.911269 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pm8z\" (UniqueName: \"kubernetes.io/projected/5883cb27-6bc8-4309-aeac-64a54a46eb89-kube-api-access-5pm8z\") pod \"keystone-784866f846-pjz9x\" (UID: \"5883cb27-6bc8-4309-aeac-64a54a46eb89\") " pod="openstack/keystone-784866f846-pjz9x"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.911302 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5883cb27-6bc8-4309-aeac-64a54a46eb89-scripts\") pod \"keystone-784866f846-pjz9x\" (UID: \"5883cb27-6bc8-4309-aeac-64a54a46eb89\") " pod="openstack/keystone-784866f846-pjz9x"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.929111 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5883cb27-6bc8-4309-aeac-64a54a46eb89-internal-tls-certs\") pod \"keystone-784866f846-pjz9x\" (UID: \"5883cb27-6bc8-4309-aeac-64a54a46eb89\") " pod="openstack/keystone-784866f846-pjz9x"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.939213 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5883cb27-6bc8-4309-aeac-64a54a46eb89-scripts\") pod \"keystone-784866f846-pjz9x\" (UID: \"5883cb27-6bc8-4309-aeac-64a54a46eb89\") " pod="openstack/keystone-784866f846-pjz9x"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.966501 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5883cb27-6bc8-4309-aeac-64a54a46eb89-credential-keys\") pod \"keystone-784866f846-pjz9x\" (UID: \"5883cb27-6bc8-4309-aeac-64a54a46eb89\") " pod="openstack/keystone-784866f846-pjz9x"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.966847 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5883cb27-6bc8-4309-aeac-64a54a46eb89-public-tls-certs\") pod \"keystone-784866f846-pjz9x\" (UID: \"5883cb27-6bc8-4309-aeac-64a54a46eb89\") " pod="openstack/keystone-784866f846-pjz9x"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.967730 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5883cb27-6bc8-4309-aeac-64a54a46eb89-fernet-keys\") pod \"keystone-784866f846-pjz9x\" (UID: \"5883cb27-6bc8-4309-aeac-64a54a46eb89\") " pod="openstack/keystone-784866f846-pjz9x"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.967909 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5883cb27-6bc8-4309-aeac-64a54a46eb89-combined-ca-bundle\") pod \"keystone-784866f846-pjz9x\" (UID: \"5883cb27-6bc8-4309-aeac-64a54a46eb89\") " pod="openstack/keystone-784866f846-pjz9x"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.970920 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5883cb27-6bc8-4309-aeac-64a54a46eb89-config-data\") pod \"keystone-784866f846-pjz9x\" (UID: \"5883cb27-6bc8-4309-aeac-64a54a46eb89\") " pod="openstack/keystone-784866f846-pjz9x"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.971695 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pm8z\" (UniqueName: \"kubernetes.io/projected/5883cb27-6bc8-4309-aeac-64a54a46eb89-kube-api-access-5pm8z\") pod \"keystone-784866f846-pjz9x\" (UID: \"5883cb27-6bc8-4309-aeac-64a54a46eb89\") " pod="openstack/keystone-784866f846-pjz9x"
Feb 02 13:24:04 crc kubenswrapper[4721]: I0202 13:24:04.994222 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-784866f846-pjz9x"
Feb 02 13:24:05 crc kubenswrapper[4721]: I0202 13:24:05.153590 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cgqfl" event={"ID":"47a4176b-5f58-47a9-a614-e5d05526da18","Type":"ContainerStarted","Data":"2f2b028f4f0c88964c0238ef71f7a14ee0e0d63a6586667e0a8a76c80b585914"}
Feb 02 13:24:05 crc kubenswrapper[4721]: I0202 13:24:05.177775 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-75b75c495b-kpsxz" event={"ID":"873ec78b-5777-4560-a744-c4789b43d966","Type":"ContainerStarted","Data":"9340d886174492f6ae0a15f3f3e7b662045af6f27123113fd5b1ff49e71bab73"}
Feb 02 13:24:05 crc kubenswrapper[4721]: I0202 13:24:05.179359 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-75b75c495b-kpsxz"
Feb 02 13:24:05 crc kubenswrapper[4721]: I0202 13:24:05.211309 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mlbxn" event={"ID":"37372b76-ef54-4a44-9b56-dea754373219","Type":"ContainerStarted","Data":"527e8468434eea06358e3d4622c114662919b1ba98ef618fb71f16dfc7759e5a"}
Feb 02 13:24:05 crc kubenswrapper[4721]: I0202 13:24:05.227454 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-cgqfl" podStartSLOduration=4.675115627 podStartE2EDuration="46.227435362s" podCreationTimestamp="2026-02-02 13:23:19 +0000 UTC" firstStartedPulling="2026-02-02 13:23:21.764751597 +0000 UTC m=+1342.067265986" lastFinishedPulling="2026-02-02 13:24:03.317071332 +0000 UTC m=+1383.619585721" observedRunningTime="2026-02-02 13:24:05.20265155 +0000 UTC m=+1385.505165949" watchObservedRunningTime="2026-02-02 13:24:05.227435362 +0000 UTC m=+1385.529949741"
Feb 02 13:24:05 crc kubenswrapper[4721]: I0202 13:24:05.265465 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8075cf6d-3ae0-468e-98cb-5f341d78b8ac","Type":"ContainerStarted","Data":"901860dc4f2cb59ed85a54f8bc10b9859a36c07381edb1151f125b84138e4df8"}
Feb 02 13:24:05 crc kubenswrapper[4721]: I0202 13:24:05.275625 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-75b75c495b-kpsxz" podStartSLOduration=8.275607139 podStartE2EDuration="8.275607139s" podCreationTimestamp="2026-02-02 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:24:05.261591919 +0000 UTC m=+1385.564106308" watchObservedRunningTime="2026-02-02 13:24:05.275607139 +0000 UTC m=+1385.578121528"
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5883cb27_6bc8_4309_aeac_64a54a46eb89.slice/crio-3f1661e4f74176df80c70a1f77adafa36c5de27686ba51983a47e447156a65e1 WatchSource:0}: Error finding container 3f1661e4f74176df80c70a1f77adafa36c5de27686ba51983a47e447156a65e1: Status 404 returned error can't find the container with id 3f1661e4f74176df80c70a1f77adafa36c5de27686ba51983a47e447156a65e1 Feb 02 13:24:06 crc kubenswrapper[4721]: I0202 13:24:06.092179 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b2blk"] Feb 02 13:24:06 crc kubenswrapper[4721]: I0202 13:24:06.095751 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b2blk" Feb 02 13:24:06 crc kubenswrapper[4721]: I0202 13:24:06.114620 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b2blk"] Feb 02 13:24:06 crc kubenswrapper[4721]: I0202 13:24:06.199525 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqx6q\" (UniqueName: \"kubernetes.io/projected/cb085bc7-03fe-45d5-8293-754aa8a47e79-kube-api-access-dqx6q\") pod \"community-operators-b2blk\" (UID: \"cb085bc7-03fe-45d5-8293-754aa8a47e79\") " pod="openshift-marketplace/community-operators-b2blk" Feb 02 13:24:06 crc kubenswrapper[4721]: I0202 13:24:06.199634 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb085bc7-03fe-45d5-8293-754aa8a47e79-catalog-content\") pod \"community-operators-b2blk\" (UID: \"cb085bc7-03fe-45d5-8293-754aa8a47e79\") " pod="openshift-marketplace/community-operators-b2blk" Feb 02 13:24:06 crc kubenswrapper[4721]: I0202 13:24:06.199726 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb085bc7-03fe-45d5-8293-754aa8a47e79-utilities\") pod \"community-operators-b2blk\" (UID: \"cb085bc7-03fe-45d5-8293-754aa8a47e79\") " pod="openshift-marketplace/community-operators-b2blk" Feb 02 13:24:06 crc kubenswrapper[4721]: I0202 13:24:06.305316 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqx6q\" (UniqueName: \"kubernetes.io/projected/cb085bc7-03fe-45d5-8293-754aa8a47e79-kube-api-access-dqx6q\") pod \"community-operators-b2blk\" (UID: \"cb085bc7-03fe-45d5-8293-754aa8a47e79\") " pod="openshift-marketplace/community-operators-b2blk" Feb 02 13:24:06 crc kubenswrapper[4721]: I0202 13:24:06.305765 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb085bc7-03fe-45d5-8293-754aa8a47e79-catalog-content\") pod \"community-operators-b2blk\" (UID: \"cb085bc7-03fe-45d5-8293-754aa8a47e79\") " pod="openshift-marketplace/community-operators-b2blk" Feb 02 13:24:06 crc kubenswrapper[4721]: I0202 13:24:06.306543 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb085bc7-03fe-45d5-8293-754aa8a47e79-catalog-content\") pod \"community-operators-b2blk\" (UID: \"cb085bc7-03fe-45d5-8293-754aa8a47e79\") " pod="openshift-marketplace/community-operators-b2blk" Feb 02 13:24:06 crc kubenswrapper[4721]: I0202 13:24:06.307222 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/cb085bc7-03fe-45d5-8293-754aa8a47e79-utilities\") pod \"community-operators-b2blk\" (UID: \"cb085bc7-03fe-45d5-8293-754aa8a47e79\") " pod="openshift-marketplace/community-operators-b2blk" Feb 02 13:24:06 crc kubenswrapper[4721]: I0202 13:24:06.307634 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb085bc7-03fe-45d5-8293-754aa8a47e79-utilities\") pod \"community-operators-b2blk\" (UID: \"cb085bc7-03fe-45d5-8293-754aa8a47e79\") " pod="openshift-marketplace/community-operators-b2blk" Feb 02 13:24:06 crc kubenswrapper[4721]: I0202 13:24:06.325399 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqx6q\" (UniqueName: \"kubernetes.io/projected/cb085bc7-03fe-45d5-8293-754aa8a47e79-kube-api-access-dqx6q\") pod \"community-operators-b2blk\" (UID: \"cb085bc7-03fe-45d5-8293-754aa8a47e79\") " pod="openshift-marketplace/community-operators-b2blk" Feb 02 13:24:06 crc kubenswrapper[4721]: I0202 13:24:06.329185 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-784866f846-pjz9x" event={"ID":"5883cb27-6bc8-4309-aeac-64a54a46eb89","Type":"ContainerStarted","Data":"3f1661e4f74176df80c70a1f77adafa36c5de27686ba51983a47e447156a65e1"} Feb 02 13:24:06 crc kubenswrapper[4721]: I0202 13:24:06.330035 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-75b75c495b-kpsxz" Feb 02 13:24:06 crc kubenswrapper[4721]: I0202 13:24:06.423136 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a12cebe8-c719-4841-8d01-e9faf9b745cf" path="/var/lib/kubelet/pods/a12cebe8-c719-4841-8d01-e9faf9b745cf/volumes" Feb 02 13:24:06 crc kubenswrapper[4721]: I0202 13:24:06.600187 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b2blk" Feb 02 13:24:07 crc kubenswrapper[4721]: W0202 13:24:07.261032 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb085bc7_03fe_45d5_8293_754aa8a47e79.slice/crio-ca306f9ea887ebe52384a1958a49ecea30993901251ae5049666a5cc6680fe83 WatchSource:0}: Error finding container ca306f9ea887ebe52384a1958a49ecea30993901251ae5049666a5cc6680fe83: Status 404 returned error can't find the container with id ca306f9ea887ebe52384a1958a49ecea30993901251ae5049666a5cc6680fe83 Feb 02 13:24:07 crc kubenswrapper[4721]: I0202 13:24:07.270135 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b2blk"] Feb 02 13:24:07 crc kubenswrapper[4721]: I0202 13:24:07.348342 4721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 13:24:07 crc kubenswrapper[4721]: I0202 13:24:07.348282 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2blk" event={"ID":"cb085bc7-03fe-45d5-8293-754aa8a47e79","Type":"ContainerStarted","Data":"ca306f9ea887ebe52384a1958a49ecea30993901251ae5049666a5cc6680fe83"} Feb 02 13:24:07 crc kubenswrapper[4721]: I0202 13:24:07.388887 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=11.388864553 podStartE2EDuration="11.388864553s" podCreationTimestamp="2026-02-02 13:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:24:07.372248492 +0000 UTC m=+1387.674762881" watchObservedRunningTime="2026-02-02 13:24:07.388864553 +0000 UTC m=+1387.691378942" Feb 02 13:24:08 crc kubenswrapper[4721]: I0202 13:24:08.366289 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8075cf6d-3ae0-468e-98cb-5f341d78b8ac","Type":"ContainerStarted","Data":"6b2f4ace0eb75cc9899716f3900092d7c461a3838ae11f1c1f66ca5572586d0f"} Feb 02 13:24:08 crc kubenswrapper[4721]: I0202 13:24:08.372869 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-784866f846-pjz9x" event={"ID":"5883cb27-6bc8-4309-aeac-64a54a46eb89","Type":"ContainerStarted","Data":"9286b365d0d48470dda4022122eaf45ff6454371eba06e55cacc127143ef2b2e"} Feb 02 13:24:08 crc kubenswrapper[4721]: I0202 13:24:08.373228 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-784866f846-pjz9x" Feb 02 13:24:08 crc kubenswrapper[4721]: I0202 13:24:08.384646 4721 generic.go:334] "Generic (PLEG): container finished" podID="37372b76-ef54-4a44-9b56-dea754373219" containerID="be431440dec21df753f3a84f641990a0bc2d2448dae27123ad4db374fc8f8981" exitCode=0 Feb 02 13:24:08 crc kubenswrapper[4721]: I0202 13:24:08.384939 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mlbxn" event={"ID":"37372b76-ef54-4a44-9b56-dea754373219","Type":"ContainerDied","Data":"be431440dec21df753f3a84f641990a0bc2d2448dae27123ad4db374fc8f8981"} Feb 02 13:24:08 crc kubenswrapper[4721]: I0202 13:24:08.396821 4721 generic.go:334] "Generic (PLEG): container finished" podID="cb085bc7-03fe-45d5-8293-754aa8a47e79" containerID="5e7d094ebf15b03eae8cab7968f99b33fb74f6a236c1143143fd5cef22da4396" exitCode=0 Feb 02 13:24:08 crc kubenswrapper[4721]: I0202 13:24:08.396876 4721 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2blk" event={"ID":"cb085bc7-03fe-45d5-8293-754aa8a47e79","Type":"ContainerDied","Data":"5e7d094ebf15b03eae8cab7968f99b33fb74f6a236c1143143fd5cef22da4396"} Feb 02 13:24:08 crc kubenswrapper[4721]: I0202 13:24:08.480278 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-784866f846-pjz9x" podStartSLOduration=4.480249412 podStartE2EDuration="4.480249412s" podCreationTimestamp="2026-02-02 13:24:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:24:08.403969223 +0000 UTC m=+1388.706483642" watchObservedRunningTime="2026-02-02 13:24:08.480249412 +0000 UTC m=+1388.782763821" Feb 02 13:24:09 crc kubenswrapper[4721]: I0202 13:24:09.434393 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8075cf6d-3ae0-468e-98cb-5f341d78b8ac","Type":"ContainerStarted","Data":"8d37382e602df793413e2180a90609111293c970770dc8adb79b0513cd945df9"} Feb 02 13:24:09 crc kubenswrapper[4721]: I0202 13:24:09.444142 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-n52pp" event={"ID":"9fa244a8-7588-4d87-bd5b-cbcd10780c83","Type":"ContainerStarted","Data":"0b784cae8dddc21c2c3af89d409032d5d888340b53c786c1dc27d600d257dd2b"} Feb 02 13:24:09 crc kubenswrapper[4721]: I0202 13:24:09.466119 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=11.466092819 podStartE2EDuration="11.466092819s" podCreationTimestamp="2026-02-02 13:23:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:24:09.463378375 +0000 UTC m=+1389.765892794" watchObservedRunningTime="2026-02-02 13:24:09.466092819 +0000 UTC m=+1389.768607218" Feb 02 13:24:09 crc kubenswrapper[4721]: I0202 13:24:09.497762 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-n52pp" podStartSLOduration=5.066205161 podStartE2EDuration="51.497739517s" podCreationTimestamp="2026-02-02 13:23:18 +0000 UTC" firstStartedPulling="2026-02-02 13:23:21.669051589 +0000 UTC m=+1341.971565978" lastFinishedPulling="2026-02-02 13:24:08.100585945 +0000 UTC m=+1388.403100334" observedRunningTime="2026-02-02 13:24:09.491666663 +0000 UTC m=+1389.794181052" watchObservedRunningTime="2026-02-02 13:24:09.497739517 +0000 UTC m=+1389.800253906" Feb 02 13:24:09 crc kubenswrapper[4721]: I0202 13:24:09.976514 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-75b75c495b-kpsxz" Feb 02 13:24:11 crc kubenswrapper[4721]: I0202 13:24:11.482334 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7wjxh" event={"ID":"ad3578ef-5d1b-4c52-939c-237feadc1c5c","Type":"ContainerStarted","Data":"ce2db44950c758448aaea5320ccdad1fe422fd10d5dc9377dff5887076136a7a"} Feb 02 13:24:11 crc kubenswrapper[4721]: I0202 13:24:11.506665 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-7wjxh" podStartSLOduration=6.43815022 podStartE2EDuration="53.506646931s" podCreationTimestamp="2026-02-02 13:23:18 +0000 UTC" firstStartedPulling="2026-02-02 13:23:21.437748296 +0000 UTC m=+1341.740262695" lastFinishedPulling="2026-02-02 13:24:08.506245017 +0000 UTC m=+1388.808759406" 
observedRunningTime="2026-02-02 13:24:11.505797927 +0000 UTC m=+1391.808312316" watchObservedRunningTime="2026-02-02 13:24:11.506646931 +0000 UTC m=+1391.809161320" Feb 02 13:24:11 crc kubenswrapper[4721]: I0202 13:24:11.978702 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-75b75c495b-kpsxz" Feb 02 13:24:12 crc kubenswrapper[4721]: I0202 13:24:12.263400 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6ccdcdf5fb-gncnr"] Feb 02 13:24:12 crc kubenswrapper[4721]: I0202 13:24:12.265643 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6ccdcdf5fb-gncnr" Feb 02 13:24:12 crc kubenswrapper[4721]: I0202 13:24:12.282955 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6ccdcdf5fb-gncnr"] Feb 02 13:24:12 crc kubenswrapper[4721]: I0202 13:24:12.405127 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e3f4574-6ad6-4b37-abf5-2005c8692a44-public-tls-certs\") pod \"placement-6ccdcdf5fb-gncnr\" (UID: \"8e3f4574-6ad6-4b37-abf5-2005c8692a44\") " pod="openstack/placement-6ccdcdf5fb-gncnr" Feb 02 13:24:12 crc kubenswrapper[4721]: I0202 13:24:12.405202 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e3f4574-6ad6-4b37-abf5-2005c8692a44-scripts\") pod \"placement-6ccdcdf5fb-gncnr\" (UID: \"8e3f4574-6ad6-4b37-abf5-2005c8692a44\") " pod="openstack/placement-6ccdcdf5fb-gncnr" Feb 02 13:24:12 crc kubenswrapper[4721]: I0202 13:24:12.405277 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e3f4574-6ad6-4b37-abf5-2005c8692a44-internal-tls-certs\") pod \"placement-6ccdcdf5fb-gncnr\" (UID: \"8e3f4574-6ad6-4b37-abf5-2005c8692a44\") " pod="openstack/placement-6ccdcdf5fb-gncnr" Feb 02 13:24:12 crc kubenswrapper[4721]: I0202 13:24:12.405359 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e3f4574-6ad6-4b37-abf5-2005c8692a44-config-data\") pod \"placement-6ccdcdf5fb-gncnr\" (UID: \"8e3f4574-6ad6-4b37-abf5-2005c8692a44\") " pod="openstack/placement-6ccdcdf5fb-gncnr" Feb 02 13:24:12 crc kubenswrapper[4721]: I0202 13:24:12.405460 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3f4574-6ad6-4b37-abf5-2005c8692a44-combined-ca-bundle\") pod \"placement-6ccdcdf5fb-gncnr\" (UID: \"8e3f4574-6ad6-4b37-abf5-2005c8692a44\") " pod="openstack/placement-6ccdcdf5fb-gncnr" Feb 02 13:24:12 crc kubenswrapper[4721]: I0202 13:24:12.405610 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqwdb\" (UniqueName: \"kubernetes.io/projected/8e3f4574-6ad6-4b37-abf5-2005c8692a44-kube-api-access-jqwdb\") pod \"placement-6ccdcdf5fb-gncnr\" (UID: \"8e3f4574-6ad6-4b37-abf5-2005c8692a44\") " pod="openstack/placement-6ccdcdf5fb-gncnr" Feb 02 13:24:12 crc kubenswrapper[4721]: I0202 13:24:12.405737 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e3f4574-6ad6-4b37-abf5-2005c8692a44-logs\") pod 
\"placement-6ccdcdf5fb-gncnr\" (UID: \"8e3f4574-6ad6-4b37-abf5-2005c8692a44\") " pod="openstack/placement-6ccdcdf5fb-gncnr" Feb 02 13:24:12 crc kubenswrapper[4721]: I0202 13:24:12.495502 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mlbxn" event={"ID":"37372b76-ef54-4a44-9b56-dea754373219","Type":"ContainerStarted","Data":"e7fd605be804cd647b2eac1ab49ff2a82cd119fbc55b1ed46d82b6f010b0c40a"} Feb 02 13:24:12 crc kubenswrapper[4721]: I0202 13:24:12.498214 4721 generic.go:334] "Generic (PLEG): container finished" podID="cb085bc7-03fe-45d5-8293-754aa8a47e79" containerID="1657f0f86f0baa3fd5b4910f2d2a3f7f01376b4573cd7a450f9ac3d3c0377f0f" exitCode=0 Feb 02 13:24:12 crc kubenswrapper[4721]: I0202 13:24:12.498269 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2blk" event={"ID":"cb085bc7-03fe-45d5-8293-754aa8a47e79","Type":"ContainerDied","Data":"1657f0f86f0baa3fd5b4910f2d2a3f7f01376b4573cd7a450f9ac3d3c0377f0f"} Feb 02 13:24:12 crc kubenswrapper[4721]: I0202 13:24:12.507510 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e3f4574-6ad6-4b37-abf5-2005c8692a44-logs\") pod \"placement-6ccdcdf5fb-gncnr\" (UID: \"8e3f4574-6ad6-4b37-abf5-2005c8692a44\") " pod="openstack/placement-6ccdcdf5fb-gncnr" Feb 02 13:24:12 crc kubenswrapper[4721]: I0202 13:24:12.507590 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e3f4574-6ad6-4b37-abf5-2005c8692a44-public-tls-certs\") pod \"placement-6ccdcdf5fb-gncnr\" (UID: \"8e3f4574-6ad6-4b37-abf5-2005c8692a44\") " pod="openstack/placement-6ccdcdf5fb-gncnr" Feb 02 13:24:12 crc kubenswrapper[4721]: I0202 13:24:12.507627 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e3f4574-6ad6-4b37-abf5-2005c8692a44-scripts\") pod \"placement-6ccdcdf5fb-gncnr\" (UID: \"8e3f4574-6ad6-4b37-abf5-2005c8692a44\") " pod="openstack/placement-6ccdcdf5fb-gncnr" Feb 02 13:24:12 crc kubenswrapper[4721]: I0202 13:24:12.507703 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e3f4574-6ad6-4b37-abf5-2005c8692a44-internal-tls-certs\") pod \"placement-6ccdcdf5fb-gncnr\" (UID: \"8e3f4574-6ad6-4b37-abf5-2005c8692a44\") " pod="openstack/placement-6ccdcdf5fb-gncnr" Feb 02 13:24:12 crc kubenswrapper[4721]: I0202 13:24:12.507748 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e3f4574-6ad6-4b37-abf5-2005c8692a44-config-data\") pod \"placement-6ccdcdf5fb-gncnr\" (UID: \"8e3f4574-6ad6-4b37-abf5-2005c8692a44\") " pod="openstack/placement-6ccdcdf5fb-gncnr" Feb 02 13:24:12 crc kubenswrapper[4721]: I0202 13:24:12.507844 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3f4574-6ad6-4b37-abf5-2005c8692a44-combined-ca-bundle\") pod \"placement-6ccdcdf5fb-gncnr\" (UID: \"8e3f4574-6ad6-4b37-abf5-2005c8692a44\") " pod="openstack/placement-6ccdcdf5fb-gncnr" Feb 02 13:24:12 crc kubenswrapper[4721]: I0202 13:24:12.507930 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqwdb\" (UniqueName: 
\"kubernetes.io/projected/8e3f4574-6ad6-4b37-abf5-2005c8692a44-kube-api-access-jqwdb\") pod \"placement-6ccdcdf5fb-gncnr\" (UID: \"8e3f4574-6ad6-4b37-abf5-2005c8692a44\") " pod="openstack/placement-6ccdcdf5fb-gncnr" Feb 02 13:24:12 crc kubenswrapper[4721]: I0202 13:24:12.508139 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e3f4574-6ad6-4b37-abf5-2005c8692a44-logs\") pod \"placement-6ccdcdf5fb-gncnr\" (UID: \"8e3f4574-6ad6-4b37-abf5-2005c8692a44\") " pod="openstack/placement-6ccdcdf5fb-gncnr" Feb 02 13:24:12 crc kubenswrapper[4721]: I0202 13:24:12.513857 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e3f4574-6ad6-4b37-abf5-2005c8692a44-public-tls-certs\") pod \"placement-6ccdcdf5fb-gncnr\" (UID: \"8e3f4574-6ad6-4b37-abf5-2005c8692a44\") " pod="openstack/placement-6ccdcdf5fb-gncnr" Feb 02 13:24:12 crc kubenswrapper[4721]: I0202 13:24:12.604900 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e3f4574-6ad6-4b37-abf5-2005c8692a44-scripts\") pod \"placement-6ccdcdf5fb-gncnr\" (UID: \"8e3f4574-6ad6-4b37-abf5-2005c8692a44\") " pod="openstack/placement-6ccdcdf5fb-gncnr" Feb 02 13:24:12 crc kubenswrapper[4721]: I0202 13:24:12.610691 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e3f4574-6ad6-4b37-abf5-2005c8692a44-internal-tls-certs\") pod \"placement-6ccdcdf5fb-gncnr\" (UID: \"8e3f4574-6ad6-4b37-abf5-2005c8692a44\") " pod="openstack/placement-6ccdcdf5fb-gncnr" Feb 02 13:24:12 crc kubenswrapper[4721]: I0202 13:24:12.613825 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e3f4574-6ad6-4b37-abf5-2005c8692a44-config-data\") pod \"placement-6ccdcdf5fb-gncnr\" (UID: \"8e3f4574-6ad6-4b37-abf5-2005c8692a44\") " pod="openstack/placement-6ccdcdf5fb-gncnr" Feb 02 13:24:12 crc kubenswrapper[4721]: I0202 13:24:12.615752 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3f4574-6ad6-4b37-abf5-2005c8692a44-combined-ca-bundle\") pod \"placement-6ccdcdf5fb-gncnr\" (UID: \"8e3f4574-6ad6-4b37-abf5-2005c8692a44\") " pod="openstack/placement-6ccdcdf5fb-gncnr" Feb 02 13:24:12 crc kubenswrapper[4721]: I0202 13:24:12.616614 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqwdb\" (UniqueName: \"kubernetes.io/projected/8e3f4574-6ad6-4b37-abf5-2005c8692a44-kube-api-access-jqwdb\") pod \"placement-6ccdcdf5fb-gncnr\" (UID: \"8e3f4574-6ad6-4b37-abf5-2005c8692a44\") " pod="openstack/placement-6ccdcdf5fb-gncnr" Feb 02 13:24:12 crc kubenswrapper[4721]: I0202 13:24:12.884103 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6ccdcdf5fb-gncnr" Feb 02 13:24:15 crc kubenswrapper[4721]: I0202 13:24:15.591900 4721 generic.go:334] "Generic (PLEG): container finished" podID="37372b76-ef54-4a44-9b56-dea754373219" containerID="e7fd605be804cd647b2eac1ab49ff2a82cd119fbc55b1ed46d82b6f010b0c40a" exitCode=0 Feb 02 13:24:15 crc kubenswrapper[4721]: I0202 13:24:15.592009 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mlbxn" event={"ID":"37372b76-ef54-4a44-9b56-dea754373219","Type":"ContainerDied","Data":"e7fd605be804cd647b2eac1ab49ff2a82cd119fbc55b1ed46d82b6f010b0c40a"} Feb 02 13:24:16 crc kubenswrapper[4721]: I0202 13:24:16.606010 4721 generic.go:334] "Generic (PLEG): container finished" podID="47a4176b-5f58-47a9-a614-e5d05526da18" containerID="2f2b028f4f0c88964c0238ef71f7a14ee0e0d63a6586667e0a8a76c80b585914" exitCode=0 Feb 02 13:24:16 crc kubenswrapper[4721]: I0202 13:24:16.606098 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cgqfl" event={"ID":"47a4176b-5f58-47a9-a614-e5d05526da18","Type":"ContainerDied","Data":"2f2b028f4f0c88964c0238ef71f7a14ee0e0d63a6586667e0a8a76c80b585914"} Feb 02 13:24:17 crc kubenswrapper[4721]: I0202 13:24:17.298951 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 13:24:17 crc kubenswrapper[4721]: I0202 13:24:17.299378 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 13:24:17 crc kubenswrapper[4721]: I0202 13:24:17.344436 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 13:24:17 crc kubenswrapper[4721]: I0202 13:24:17.364767 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 13:24:17 crc kubenswrapper[4721]: I0202 13:24:17.620489 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 13:24:17 crc kubenswrapper[4721]: I0202 13:24:17.620530 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 13:24:18 crc kubenswrapper[4721]: I0202 13:24:18.442948 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-cgqfl" Feb 02 13:24:18 crc kubenswrapper[4721]: I0202 13:24:18.483485 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/47a4176b-5f58-47a9-a614-e5d05526da18-db-sync-config-data\") pod \"47a4176b-5f58-47a9-a614-e5d05526da18\" (UID: \"47a4176b-5f58-47a9-a614-e5d05526da18\") " Feb 02 13:24:18 crc kubenswrapper[4721]: I0202 13:24:18.483549 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-777ht\" (UniqueName: \"kubernetes.io/projected/47a4176b-5f58-47a9-a614-e5d05526da18-kube-api-access-777ht\") pod \"47a4176b-5f58-47a9-a614-e5d05526da18\" (UID: \"47a4176b-5f58-47a9-a614-e5d05526da18\") " Feb 02 13:24:18 crc kubenswrapper[4721]: I0202 13:24:18.483643 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47a4176b-5f58-47a9-a614-e5d05526da18-combined-ca-bundle\") pod \"47a4176b-5f58-47a9-a614-e5d05526da18\" (UID: \"47a4176b-5f58-47a9-a614-e5d05526da18\") " Feb 02 13:24:18 crc kubenswrapper[4721]: I0202 13:24:18.493964 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47a4176b-5f58-47a9-a614-e5d05526da18-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "47a4176b-5f58-47a9-a614-e5d05526da18" (UID: "47a4176b-5f58-47a9-a614-e5d05526da18"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:24:18 crc kubenswrapper[4721]: I0202 13:24:18.501619 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47a4176b-5f58-47a9-a614-e5d05526da18-kube-api-access-777ht" (OuterVolumeSpecName: "kube-api-access-777ht") pod "47a4176b-5f58-47a9-a614-e5d05526da18" (UID: "47a4176b-5f58-47a9-a614-e5d05526da18"). InnerVolumeSpecName "kube-api-access-777ht". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:24:18 crc kubenswrapper[4721]: I0202 13:24:18.588872 4721 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/47a4176b-5f58-47a9-a614-e5d05526da18-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:18 crc kubenswrapper[4721]: I0202 13:24:18.588910 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-777ht\" (UniqueName: \"kubernetes.io/projected/47a4176b-5f58-47a9-a614-e5d05526da18-kube-api-access-777ht\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:18 crc kubenswrapper[4721]: I0202 13:24:18.616410 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47a4176b-5f58-47a9-a614-e5d05526da18-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47a4176b-5f58-47a9-a614-e5d05526da18" (UID: "47a4176b-5f58-47a9-a614-e5d05526da18"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:24:18 crc kubenswrapper[4721]: I0202 13:24:18.674916 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-cgqfl" Feb 02 13:24:18 crc kubenswrapper[4721]: I0202 13:24:18.675797 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cgqfl" event={"ID":"47a4176b-5f58-47a9-a614-e5d05526da18","Type":"ContainerDied","Data":"0e87512b76ceddbcd36259192a7d06b2c50b8fc83a99c2a6ecdeee04f9de5d79"} Feb 02 13:24:18 crc kubenswrapper[4721]: I0202 13:24:18.675849 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e87512b76ceddbcd36259192a7d06b2c50b8fc83a99c2a6ecdeee04f9de5d79" Feb 02 13:24:18 crc kubenswrapper[4721]: I0202 13:24:18.696204 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47a4176b-5f58-47a9-a614-e5d05526da18-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:18 crc kubenswrapper[4721]: E0202 13:24:18.780002 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5" Feb 02 13:24:18 crc kubenswrapper[4721]: I0202 13:24:18.887838 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6ccdcdf5fb-gncnr"] Feb 02 13:24:18 crc kubenswrapper[4721]: W0202 13:24:18.895638 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e3f4574_6ad6_4b37_abf5_2005c8692a44.slice/crio-bcb6dddc37c6d2d5129e13f487cd4b9ed27217cc44ad8a2254b576030ce32e69 WatchSource:0}: Error finding container bcb6dddc37c6d2d5129e13f487cd4b9ed27217cc44ad8a2254b576030ce32e69: Status 404 returned error can't find the container with id bcb6dddc37c6d2d5129e13f487cd4b9ed27217cc44ad8a2254b576030ce32e69 Feb 02 13:24:18 crc kubenswrapper[4721]: I0202 13:24:18.965143 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-d4595f9f9-4d2g5"] Feb 02 13:24:18 crc kubenswrapper[4721]: E0202 13:24:18.981351 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47a4176b-5f58-47a9-a614-e5d05526da18" containerName="barbican-db-sync" Feb 02 13:24:18 crc kubenswrapper[4721]: I0202 13:24:18.981387 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="47a4176b-5f58-47a9-a614-e5d05526da18" containerName="barbican-db-sync" Feb 02 13:24:18 crc kubenswrapper[4721]: I0202 13:24:18.981604 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="47a4176b-5f58-47a9-a614-e5d05526da18" containerName="barbican-db-sync" Feb 02 13:24:18 crc kubenswrapper[4721]: I0202 13:24:18.982799 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-d4595f9f9-4d2g5" Feb 02 13:24:18 crc kubenswrapper[4721]: I0202 13:24:18.986556 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 02 13:24:18 crc kubenswrapper[4721]: I0202 13:24:18.986571 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 02 13:24:18 crc kubenswrapper[4721]: I0202 13:24:18.987724 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-27rl5" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.004169 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-d4595f9f9-4d2g5"] Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.006143 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a7211b-9a15-4765-99e2-520bd1d62ff1-combined-ca-bundle\") pod \"barbican-worker-d4595f9f9-4d2g5\" (UID: \"93a7211b-9a15-4765-99e2-520bd1d62ff1\") " pod="openstack/barbican-worker-d4595f9f9-4d2g5" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.006233 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93a7211b-9a15-4765-99e2-520bd1d62ff1-config-data\") pod \"barbican-worker-d4595f9f9-4d2g5\" (UID: \"93a7211b-9a15-4765-99e2-520bd1d62ff1\") " pod="openstack/barbican-worker-d4595f9f9-4d2g5" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.006283 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93a7211b-9a15-4765-99e2-520bd1d62ff1-config-data-custom\") pod \"barbican-worker-d4595f9f9-4d2g5\" (UID: \"93a7211b-9a15-4765-99e2-520bd1d62ff1\") " pod="openstack/barbican-worker-d4595f9f9-4d2g5" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.006427 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93a7211b-9a15-4765-99e2-520bd1d62ff1-logs\") pod \"barbican-worker-d4595f9f9-4d2g5\" (UID: \"93a7211b-9a15-4765-99e2-520bd1d62ff1\") " pod="openstack/barbican-worker-d4595f9f9-4d2g5" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.006596 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jff5n\" (UniqueName: \"kubernetes.io/projected/93a7211b-9a15-4765-99e2-520bd1d62ff1-kube-api-access-jff5n\") pod \"barbican-worker-d4595f9f9-4d2g5\" (UID: \"93a7211b-9a15-4765-99e2-520bd1d62ff1\") " pod="openstack/barbican-worker-d4595f9f9-4d2g5" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.021915 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6f4497866b-px6fz"] Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.023882 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6f4497866b-px6fz" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.036031 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.056488 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6f4497866b-px6fz"] Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.108763 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a7211b-9a15-4765-99e2-520bd1d62ff1-combined-ca-bundle\") pod \"barbican-worker-d4595f9f9-4d2g5\" (UID: \"93a7211b-9a15-4765-99e2-520bd1d62ff1\") " pod="openstack/barbican-worker-d4595f9f9-4d2g5" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.108851 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93a7211b-9a15-4765-99e2-520bd1d62ff1-config-data\") pod \"barbican-worker-d4595f9f9-4d2g5\" (UID: \"93a7211b-9a15-4765-99e2-520bd1d62ff1\") " pod="openstack/barbican-worker-d4595f9f9-4d2g5" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.108896 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93a7211b-9a15-4765-99e2-520bd1d62ff1-config-data-custom\") pod \"barbican-worker-d4595f9f9-4d2g5\" (UID: \"93a7211b-9a15-4765-99e2-520bd1d62ff1\") " pod="openstack/barbican-worker-d4595f9f9-4d2g5" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.108997 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93a7211b-9a15-4765-99e2-520bd1d62ff1-logs\") pod \"barbican-worker-d4595f9f9-4d2g5\" (UID: \"93a7211b-9a15-4765-99e2-520bd1d62ff1\") " pod="openstack/barbican-worker-d4595f9f9-4d2g5" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.109063 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/755b5957-fcfa-486a-8e63-d562742d6650-config-data-custom\") pod \"barbican-keystone-listener-6f4497866b-px6fz\" (UID: \"755b5957-fcfa-486a-8e63-d562742d6650\") " pod="openstack/barbican-keystone-listener-6f4497866b-px6fz" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.109119 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/755b5957-fcfa-486a-8e63-d562742d6650-combined-ca-bundle\") pod \"barbican-keystone-listener-6f4497866b-px6fz\" (UID: \"755b5957-fcfa-486a-8e63-d562742d6650\") " pod="openstack/barbican-keystone-listener-6f4497866b-px6fz" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.109151 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/755b5957-fcfa-486a-8e63-d562742d6650-config-data\") pod \"barbican-keystone-listener-6f4497866b-px6fz\" (UID: \"755b5957-fcfa-486a-8e63-d562742d6650\") " pod="openstack/barbican-keystone-listener-6f4497866b-px6fz" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.109180 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjcp4\" (UniqueName: 
\"kubernetes.io/projected/755b5957-fcfa-486a-8e63-d562742d6650-kube-api-access-sjcp4\") pod \"barbican-keystone-listener-6f4497866b-px6fz\" (UID: \"755b5957-fcfa-486a-8e63-d562742d6650\") " pod="openstack/barbican-keystone-listener-6f4497866b-px6fz" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.109236 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/755b5957-fcfa-486a-8e63-d562742d6650-logs\") pod \"barbican-keystone-listener-6f4497866b-px6fz\" (UID: \"755b5957-fcfa-486a-8e63-d562742d6650\") " pod="openstack/barbican-keystone-listener-6f4497866b-px6fz" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.109308 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jff5n\" (UniqueName: \"kubernetes.io/projected/93a7211b-9a15-4765-99e2-520bd1d62ff1-kube-api-access-jff5n\") pod \"barbican-worker-d4595f9f9-4d2g5\" (UID: \"93a7211b-9a15-4765-99e2-520bd1d62ff1\") " pod="openstack/barbican-worker-d4595f9f9-4d2g5" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.115942 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a7211b-9a15-4765-99e2-520bd1d62ff1-combined-ca-bundle\") pod \"barbican-worker-d4595f9f9-4d2g5\" (UID: \"93a7211b-9a15-4765-99e2-520bd1d62ff1\") " pod="openstack/barbican-worker-d4595f9f9-4d2g5" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.116541 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93a7211b-9a15-4765-99e2-520bd1d62ff1-logs\") pod \"barbican-worker-d4595f9f9-4d2g5\" (UID: \"93a7211b-9a15-4765-99e2-520bd1d62ff1\") " pod="openstack/barbican-worker-d4595f9f9-4d2g5" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.119353 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93a7211b-9a15-4765-99e2-520bd1d62ff1-config-data\") pod \"barbican-worker-d4595f9f9-4d2g5\" (UID: \"93a7211b-9a15-4765-99e2-520bd1d62ff1\") " pod="openstack/barbican-worker-d4595f9f9-4d2g5" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.121841 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93a7211b-9a15-4765-99e2-520bd1d62ff1-config-data-custom\") pod \"barbican-worker-d4595f9f9-4d2g5\" (UID: \"93a7211b-9a15-4765-99e2-520bd1d62ff1\") " pod="openstack/barbican-worker-d4595f9f9-4d2g5" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.147800 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jff5n\" (UniqueName: \"kubernetes.io/projected/93a7211b-9a15-4765-99e2-520bd1d62ff1-kube-api-access-jff5n\") pod \"barbican-worker-d4595f9f9-4d2g5\" (UID: \"93a7211b-9a15-4765-99e2-520bd1d62ff1\") " pod="openstack/barbican-worker-d4595f9f9-4d2g5" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.186376 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-xjf7x"] Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.188881 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.213386 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p7rn\" (UniqueName: \"kubernetes.io/projected/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-kube-api-access-5p7rn\") pod \"dnsmasq-dns-85ff748b95-xjf7x\" (UID: \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\") " pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.213822 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-xjf7x\" (UID: \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\") " pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.213887 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-xjf7x\" (UID: \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\") " pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.217548 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/755b5957-fcfa-486a-8e63-d562742d6650-config-data-custom\") pod \"barbican-keystone-listener-6f4497866b-px6fz\" (UID: \"755b5957-fcfa-486a-8e63-d562742d6650\") " pod="openstack/barbican-keystone-listener-6f4497866b-px6fz" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.217607 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/755b5957-fcfa-486a-8e63-d562742d6650-combined-ca-bundle\") pod \"barbican-keystone-listener-6f4497866b-px6fz\" (UID: \"755b5957-fcfa-486a-8e63-d562742d6650\") " pod="openstack/barbican-keystone-listener-6f4497866b-px6fz" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.217642 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-config\") pod \"dnsmasq-dns-85ff748b95-xjf7x\" (UID: \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\") " pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.217673 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/755b5957-fcfa-486a-8e63-d562742d6650-config-data\") pod \"barbican-keystone-listener-6f4497866b-px6fz\" (UID: \"755b5957-fcfa-486a-8e63-d562742d6650\") " pod="openstack/barbican-keystone-listener-6f4497866b-px6fz" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.217705 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjcp4\" (UniqueName: \"kubernetes.io/projected/755b5957-fcfa-486a-8e63-d562742d6650-kube-api-access-sjcp4\") pod \"barbican-keystone-listener-6f4497866b-px6fz\" (UID: \"755b5957-fcfa-486a-8e63-d562742d6650\") " pod="openstack/barbican-keystone-listener-6f4497866b-px6fz" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.217915 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/755b5957-fcfa-486a-8e63-d562742d6650-logs\") pod \"barbican-keystone-listener-6f4497866b-px6fz\" (UID: \"755b5957-fcfa-486a-8e63-d562742d6650\") " pod="openstack/barbican-keystone-listener-6f4497866b-px6fz" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.217982 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-dns-svc\") pod \"dnsmasq-dns-85ff748b95-xjf7x\" (UID: \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\") " pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.218031 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-xjf7x\" (UID: \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\") " pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.221631 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/755b5957-fcfa-486a-8e63-d562742d6650-logs\") pod \"barbican-keystone-listener-6f4497866b-px6fz\" (UID: \"755b5957-fcfa-486a-8e63-d562742d6650\") " pod="openstack/barbican-keystone-listener-6f4497866b-px6fz" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.229862 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/755b5957-fcfa-486a-8e63-d562742d6650-config-data-custom\") pod \"barbican-keystone-listener-6f4497866b-px6fz\" (UID: \"755b5957-fcfa-486a-8e63-d562742d6650\") " pod="openstack/barbican-keystone-listener-6f4497866b-px6fz" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.239859 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/755b5957-fcfa-486a-8e63-d562742d6650-config-data\") pod \"barbican-keystone-listener-6f4497866b-px6fz\" (UID: \"755b5957-fcfa-486a-8e63-d562742d6650\") " pod="openstack/barbican-keystone-listener-6f4497866b-px6fz" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.256047 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/755b5957-fcfa-486a-8e63-d562742d6650-combined-ca-bundle\") pod \"barbican-keystone-listener-6f4497866b-px6fz\" (UID: \"755b5957-fcfa-486a-8e63-d562742d6650\") " pod="openstack/barbican-keystone-listener-6f4497866b-px6fz" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.269232 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjcp4\" (UniqueName: \"kubernetes.io/projected/755b5957-fcfa-486a-8e63-d562742d6650-kube-api-access-sjcp4\") pod \"barbican-keystone-listener-6f4497866b-px6fz\" (UID: \"755b5957-fcfa-486a-8e63-d562742d6650\") " pod="openstack/barbican-keystone-listener-6f4497866b-px6fz" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.276170 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-xjf7x"] Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.320592 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-dns-swift-storage-0\") pod 
\"dnsmasq-dns-85ff748b95-xjf7x\" (UID: \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\") " pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.320654 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-xjf7x\" (UID: \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\") " pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.320711 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-config\") pod \"dnsmasq-dns-85ff748b95-xjf7x\" (UID: \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\") " pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.320832 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-dns-svc\") pod \"dnsmasq-dns-85ff748b95-xjf7x\" (UID: \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\") " pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.320864 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-xjf7x\" (UID: \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\") " pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.320900 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p7rn\" (UniqueName: \"kubernetes.io/projected/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-kube-api-access-5p7rn\") pod \"dnsmasq-dns-85ff748b95-xjf7x\" (UID: \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\") " pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.322315 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-xjf7x\" (UID: \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\") " pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.329178 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-dns-svc\") pod \"dnsmasq-dns-85ff748b95-xjf7x\" (UID: \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\") " pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.329778 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-xjf7x\" (UID: \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\") " pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.329841 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-config\") pod \"dnsmasq-dns-85ff748b95-xjf7x\" (UID: \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\") " 
pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.333187 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-xjf7x\" (UID: \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\") " pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.339441 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-d4595f9f9-4d2g5" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.365691 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p7rn\" (UniqueName: \"kubernetes.io/projected/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-kube-api-access-5p7rn\") pod \"dnsmasq-dns-85ff748b95-xjf7x\" (UID: \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\") " pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.374142 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-55dd659f54-28qsl"] Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.380491 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6f4497866b-px6fz" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.392314 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-55dd659f54-28qsl"] Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.392359 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.392382 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.393540 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-55dd659f54-28qsl" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.400389 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.423164 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f119900-0b52-425a-be0a-0940a4747f89-config-data\") pod \"barbican-api-55dd659f54-28qsl\" (UID: \"0f119900-0b52-425a-be0a-0940a4747f89\") " pod="openstack/barbican-api-55dd659f54-28qsl" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.423246 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7tv9\" (UniqueName: \"kubernetes.io/projected/0f119900-0b52-425a-be0a-0940a4747f89-kube-api-access-k7tv9\") pod \"barbican-api-55dd659f54-28qsl\" (UID: \"0f119900-0b52-425a-be0a-0940a4747f89\") " pod="openstack/barbican-api-55dd659f54-28qsl" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.423360 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f119900-0b52-425a-be0a-0940a4747f89-config-data-custom\") pod \"barbican-api-55dd659f54-28qsl\" (UID: \"0f119900-0b52-425a-be0a-0940a4747f89\") " pod="openstack/barbican-api-55dd659f54-28qsl" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.423550 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f119900-0b52-425a-be0a-0940a4747f89-combined-ca-bundle\") pod \"barbican-api-55dd659f54-28qsl\" (UID: \"0f119900-0b52-425a-be0a-0940a4747f89\") " pod="openstack/barbican-api-55dd659f54-28qsl" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.423653 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f119900-0b52-425a-be0a-0940a4747f89-logs\") pod \"barbican-api-55dd659f54-28qsl\" (UID: \"0f119900-0b52-425a-be0a-0940a4747f89\") " pod="openstack/barbican-api-55dd659f54-28qsl" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.474518 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.530227 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.532817 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f119900-0b52-425a-be0a-0940a4747f89-config-data\") pod \"barbican-api-55dd659f54-28qsl\" (UID: \"0f119900-0b52-425a-be0a-0940a4747f89\") " pod="openstack/barbican-api-55dd659f54-28qsl" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.532920 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7tv9\" (UniqueName: \"kubernetes.io/projected/0f119900-0b52-425a-be0a-0940a4747f89-kube-api-access-k7tv9\") pod \"barbican-api-55dd659f54-28qsl\" (UID: \"0f119900-0b52-425a-be0a-0940a4747f89\") " pod="openstack/barbican-api-55dd659f54-28qsl" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.533021 4721 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f119900-0b52-425a-be0a-0940a4747f89-config-data-custom\") pod \"barbican-api-55dd659f54-28qsl\" (UID: \"0f119900-0b52-425a-be0a-0940a4747f89\") " pod="openstack/barbican-api-55dd659f54-28qsl" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.533239 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f119900-0b52-425a-be0a-0940a4747f89-combined-ca-bundle\") pod \"barbican-api-55dd659f54-28qsl\" (UID: \"0f119900-0b52-425a-be0a-0940a4747f89\") " pod="openstack/barbican-api-55dd659f54-28qsl" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.571788 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f119900-0b52-425a-be0a-0940a4747f89-logs\") pod \"barbican-api-55dd659f54-28qsl\" (UID: \"0f119900-0b52-425a-be0a-0940a4747f89\") " pod="openstack/barbican-api-55dd659f54-28qsl" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.587656 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f119900-0b52-425a-be0a-0940a4747f89-logs\") pod \"barbican-api-55dd659f54-28qsl\" (UID: \"0f119900-0b52-425a-be0a-0940a4747f89\") " pod="openstack/barbican-api-55dd659f54-28qsl" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.651920 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f119900-0b52-425a-be0a-0940a4747f89-config-data\") pod \"barbican-api-55dd659f54-28qsl\" (UID: \"0f119900-0b52-425a-be0a-0940a4747f89\") " pod="openstack/barbican-api-55dd659f54-28qsl" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.652148 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f119900-0b52-425a-be0a-0940a4747f89-config-data-custom\") pod \"barbican-api-55dd659f54-28qsl\" (UID: \"0f119900-0b52-425a-be0a-0940a4747f89\") " pod="openstack/barbican-api-55dd659f54-28qsl" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.654352 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7tv9\" (UniqueName: \"kubernetes.io/projected/0f119900-0b52-425a-be0a-0940a4747f89-kube-api-access-k7tv9\") pod \"barbican-api-55dd659f54-28qsl\" (UID: \"0f119900-0b52-425a-be0a-0940a4747f89\") " pod="openstack/barbican-api-55dd659f54-28qsl" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.680127 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f119900-0b52-425a-be0a-0940a4747f89-combined-ca-bundle\") pod \"barbican-api-55dd659f54-28qsl\" (UID: \"0f119900-0b52-425a-be0a-0940a4747f89\") " pod="openstack/barbican-api-55dd659f54-28qsl" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.691140 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.731558 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5","Type":"ContainerStarted","Data":"3192e734a06fcd9f8d184e01215add4fd33d5d9771b0e9c27a5f5edaae278699"} Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.732290 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5" containerName="ceilometer-notification-agent" containerID="cri-o://532a8ed94d348484657193a2554795492b450f7f802dc97b0881265e4ef935ae" gracePeriod=30 Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.732614 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.733013 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5" containerName="proxy-httpd" containerID="cri-o://3192e734a06fcd9f8d184e01215add4fd33d5d9771b0e9c27a5f5edaae278699" gracePeriod=30 Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.733096 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5" containerName="sg-core" containerID="cri-o://4ae676a120a0891127e9d7af3640c4f6801bac80535f6452a01c2c0a7c50780d" gracePeriod=30 Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.742114 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-55dd659f54-28qsl" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.790564 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mlbxn" event={"ID":"37372b76-ef54-4a44-9b56-dea754373219","Type":"ContainerStarted","Data":"c5c7de4e95c2706b26ef16b1a2612a77aaede7e751f9c5dde67c52ea9832ec9a"} Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.794543 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6ccdcdf5fb-gncnr" event={"ID":"8e3f4574-6ad6-4b37-abf5-2005c8692a44","Type":"ContainerStarted","Data":"bcb6dddc37c6d2d5129e13f487cd4b9ed27217cc44ad8a2254b576030ce32e69"} Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.829320 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2blk" event={"ID":"cb085bc7-03fe-45d5-8293-754aa8a47e79","Type":"ContainerStarted","Data":"fe87ede3ed73966fcb2f3ff06b67149c74278e5cdd6e6a2388c62a80901815de"} Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.829373 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.829387 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.902769 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mlbxn" podStartSLOduration=6.902242617 podStartE2EDuration="16.90274051s" podCreationTimestamp="2026-02-02 13:24:03 +0000 UTC" firstStartedPulling="2026-02-02 13:24:08.388863554 +0000 UTC m=+1388.691377943" lastFinishedPulling="2026-02-02 13:24:18.389361447 +0000 UTC m=+1398.691875836" 
observedRunningTime="2026-02-02 13:24:19.828566539 +0000 UTC m=+1400.131080938" watchObservedRunningTime="2026-02-02 13:24:19.90274051 +0000 UTC m=+1400.205254909" Feb 02 13:24:19 crc kubenswrapper[4721]: I0202 13:24:19.920958 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b2blk" podStartSLOduration=3.906210125 podStartE2EDuration="13.920930724s" podCreationTimestamp="2026-02-02 13:24:06 +0000 UTC" firstStartedPulling="2026-02-02 13:24:08.409608406 +0000 UTC m=+1388.712122805" lastFinishedPulling="2026-02-02 13:24:18.424329015 +0000 UTC m=+1398.726843404" observedRunningTime="2026-02-02 13:24:19.864560865 +0000 UTC m=+1400.167075254" watchObservedRunningTime="2026-02-02 13:24:19.920930724 +0000 UTC m=+1400.223445113" Feb 02 13:24:20 crc kubenswrapper[4721]: I0202 13:24:20.255294 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-d4595f9f9-4d2g5"] Feb 02 13:24:20 crc kubenswrapper[4721]: I0202 13:24:20.697938 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-xjf7x"] Feb 02 13:24:20 crc kubenswrapper[4721]: I0202 13:24:20.710817 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6f4497866b-px6fz"] Feb 02 13:24:20 crc kubenswrapper[4721]: I0202 13:24:20.820417 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-55dd659f54-28qsl"] Feb 02 13:24:20 crc kubenswrapper[4721]: I0202 13:24:20.849478 4721 generic.go:334] "Generic (PLEG): container finished" podID="1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5" containerID="3192e734a06fcd9f8d184e01215add4fd33d5d9771b0e9c27a5f5edaae278699" exitCode=0 Feb 02 13:24:20 crc kubenswrapper[4721]: I0202 13:24:20.849865 4721 generic.go:334] "Generic (PLEG): container finished" podID="1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5" containerID="4ae676a120a0891127e9d7af3640c4f6801bac80535f6452a01c2c0a7c50780d" exitCode=2 Feb 02 13:24:20 crc kubenswrapper[4721]: I0202 13:24:20.850001 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5","Type":"ContainerDied","Data":"3192e734a06fcd9f8d184e01215add4fd33d5d9771b0e9c27a5f5edaae278699"} Feb 02 13:24:20 crc kubenswrapper[4721]: I0202 13:24:20.850123 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5","Type":"ContainerDied","Data":"4ae676a120a0891127e9d7af3640c4f6801bac80535f6452a01c2c0a7c50780d"} Feb 02 13:24:20 crc kubenswrapper[4721]: I0202 13:24:20.852460 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" event={"ID":"d2fa85eb-972d-4369-8103-dd4cd3e2b78a","Type":"ContainerStarted","Data":"99da6e129c53d9990311d147953c7a4dfacacce031a5f1e7ffc090745f296f41"} Feb 02 13:24:20 crc kubenswrapper[4721]: I0202 13:24:20.869683 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6ccdcdf5fb-gncnr" event={"ID":"8e3f4574-6ad6-4b37-abf5-2005c8692a44","Type":"ContainerStarted","Data":"d560cca08f2f347ec63e253e602295448f38febe23beccfbe7f50b89b4a78d01"} Feb 02 13:24:20 crc kubenswrapper[4721]: I0202 13:24:20.871369 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55dd659f54-28qsl" event={"ID":"0f119900-0b52-425a-be0a-0940a4747f89","Type":"ContainerStarted","Data":"5366c1b45533360e87afcbc53d2e0646f4fb35a11a7eba0ff655b53f1aa8c34c"} Feb 02 13:24:20 crc 
kubenswrapper[4721]: I0202 13:24:20.872419 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6f4497866b-px6fz" event={"ID":"755b5957-fcfa-486a-8e63-d562742d6650","Type":"ContainerStarted","Data":"0eceeef39bf1bbda63753bcc4682a336c5f016ce2fd121b0c3b2a127658126dd"} Feb 02 13:24:20 crc kubenswrapper[4721]: I0202 13:24:20.880617 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7556fd87fb-z78lc" Feb 02 13:24:20 crc kubenswrapper[4721]: I0202 13:24:20.882755 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d4595f9f9-4d2g5" event={"ID":"93a7211b-9a15-4765-99e2-520bd1d62ff1","Type":"ContainerStarted","Data":"19578437d5b967a5d3ffcff4a282b024b837040179895aa6480087713455e07b"} Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.122547 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59d9f7977f-7dt9k"] Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.122998 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-59d9f7977f-7dt9k" podUID="92544741-12fa-42ac-ba5b-67179ec9443b" containerName="neutron-api" containerID="cri-o://da489fd3d8eceb37f5d84a7cb93a4298ee1e4fc025f5a8e61c61624f47e5cc8a" gracePeriod=30 Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.123759 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-59d9f7977f-7dt9k" podUID="92544741-12fa-42ac-ba5b-67179ec9443b" containerName="neutron-httpd" containerID="cri-o://8b6cdf8693a8045891752ca1818e301a94afd2ff81089e5e3f78d2e62be53c12" gracePeriod=30 Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.142631 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-59d9f7977f-7dt9k" podUID="92544741-12fa-42ac-ba5b-67179ec9443b" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.196:9696/\": EOF" Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.200218 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-74cc678f5-fkzpw"] Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.202662 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-74cc678f5-fkzpw" Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.223819 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74cc678f5-fkzpw"] Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.255787 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40093ddb-a585-427d-88f6-110b4ea07578-internal-tls-certs\") pod \"neutron-74cc678f5-fkzpw\" (UID: \"40093ddb-a585-427d-88f6-110b4ea07578\") " pod="openstack/neutron-74cc678f5-fkzpw" Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.255942 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/40093ddb-a585-427d-88f6-110b4ea07578-ovndb-tls-certs\") pod \"neutron-74cc678f5-fkzpw\" (UID: \"40093ddb-a585-427d-88f6-110b4ea07578\") " pod="openstack/neutron-74cc678f5-fkzpw" Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.255972 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/40093ddb-a585-427d-88f6-110b4ea07578-httpd-config\") pod \"neutron-74cc678f5-fkzpw\" (UID: \"40093ddb-a585-427d-88f6-110b4ea07578\") " pod="openstack/neutron-74cc678f5-fkzpw" Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.256009 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40093ddb-a585-427d-88f6-110b4ea07578-combined-ca-bundle\") pod \"neutron-74cc678f5-fkzpw\" (UID: \"40093ddb-a585-427d-88f6-110b4ea07578\") " pod="openstack/neutron-74cc678f5-fkzpw" Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.256036 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbzqz\" (UniqueName: \"kubernetes.io/projected/40093ddb-a585-427d-88f6-110b4ea07578-kube-api-access-pbzqz\") pod \"neutron-74cc678f5-fkzpw\" (UID: \"40093ddb-a585-427d-88f6-110b4ea07578\") " pod="openstack/neutron-74cc678f5-fkzpw" Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.256128 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40093ddb-a585-427d-88f6-110b4ea07578-public-tls-certs\") pod \"neutron-74cc678f5-fkzpw\" (UID: \"40093ddb-a585-427d-88f6-110b4ea07578\") " pod="openstack/neutron-74cc678f5-fkzpw" Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.256275 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/40093ddb-a585-427d-88f6-110b4ea07578-config\") pod \"neutron-74cc678f5-fkzpw\" (UID: \"40093ddb-a585-427d-88f6-110b4ea07578\") " pod="openstack/neutron-74cc678f5-fkzpw" Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.358898 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/40093ddb-a585-427d-88f6-110b4ea07578-config\") pod \"neutron-74cc678f5-fkzpw\" (UID: \"40093ddb-a585-427d-88f6-110b4ea07578\") " pod="openstack/neutron-74cc678f5-fkzpw" Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.359029 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/40093ddb-a585-427d-88f6-110b4ea07578-internal-tls-certs\") pod \"neutron-74cc678f5-fkzpw\" (UID: \"40093ddb-a585-427d-88f6-110b4ea07578\") " pod="openstack/neutron-74cc678f5-fkzpw" Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.359186 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/40093ddb-a585-427d-88f6-110b4ea07578-ovndb-tls-certs\") pod \"neutron-74cc678f5-fkzpw\" (UID: \"40093ddb-a585-427d-88f6-110b4ea07578\") " pod="openstack/neutron-74cc678f5-fkzpw" Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.359214 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/40093ddb-a585-427d-88f6-110b4ea07578-httpd-config\") pod \"neutron-74cc678f5-fkzpw\" (UID: \"40093ddb-a585-427d-88f6-110b4ea07578\") " pod="openstack/neutron-74cc678f5-fkzpw" Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.359253 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40093ddb-a585-427d-88f6-110b4ea07578-combined-ca-bundle\") pod \"neutron-74cc678f5-fkzpw\" (UID: \"40093ddb-a585-427d-88f6-110b4ea07578\") " pod="openstack/neutron-74cc678f5-fkzpw" Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.359284 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbzqz\" (UniqueName: \"kubernetes.io/projected/40093ddb-a585-427d-88f6-110b4ea07578-kube-api-access-pbzqz\") pod \"neutron-74cc678f5-fkzpw\" (UID: \"40093ddb-a585-427d-88f6-110b4ea07578\") " pod="openstack/neutron-74cc678f5-fkzpw" Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.359355 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40093ddb-a585-427d-88f6-110b4ea07578-public-tls-certs\") pod \"neutron-74cc678f5-fkzpw\" (UID: \"40093ddb-a585-427d-88f6-110b4ea07578\") " pod="openstack/neutron-74cc678f5-fkzpw" Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.363897 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40093ddb-a585-427d-88f6-110b4ea07578-public-tls-certs\") pod \"neutron-74cc678f5-fkzpw\" (UID: \"40093ddb-a585-427d-88f6-110b4ea07578\") " pod="openstack/neutron-74cc678f5-fkzpw" Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.367005 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/40093ddb-a585-427d-88f6-110b4ea07578-config\") pod \"neutron-74cc678f5-fkzpw\" (UID: \"40093ddb-a585-427d-88f6-110b4ea07578\") " pod="openstack/neutron-74cc678f5-fkzpw" Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.371885 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40093ddb-a585-427d-88f6-110b4ea07578-combined-ca-bundle\") pod \"neutron-74cc678f5-fkzpw\" (UID: \"40093ddb-a585-427d-88f6-110b4ea07578\") " pod="openstack/neutron-74cc678f5-fkzpw" Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.373572 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40093ddb-a585-427d-88f6-110b4ea07578-internal-tls-certs\") pod \"neutron-74cc678f5-fkzpw\" (UID: \"40093ddb-a585-427d-88f6-110b4ea07578\") " 
pod="openstack/neutron-74cc678f5-fkzpw" Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.377323 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/40093ddb-a585-427d-88f6-110b4ea07578-ovndb-tls-certs\") pod \"neutron-74cc678f5-fkzpw\" (UID: \"40093ddb-a585-427d-88f6-110b4ea07578\") " pod="openstack/neutron-74cc678f5-fkzpw" Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.381684 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/40093ddb-a585-427d-88f6-110b4ea07578-httpd-config\") pod \"neutron-74cc678f5-fkzpw\" (UID: \"40093ddb-a585-427d-88f6-110b4ea07578\") " pod="openstack/neutron-74cc678f5-fkzpw" Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.393937 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbzqz\" (UniqueName: \"kubernetes.io/projected/40093ddb-a585-427d-88f6-110b4ea07578-kube-api-access-pbzqz\") pod \"neutron-74cc678f5-fkzpw\" (UID: \"40093ddb-a585-427d-88f6-110b4ea07578\") " pod="openstack/neutron-74cc678f5-fkzpw" Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.493720 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74cc678f5-fkzpw" Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.960314 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:24:21 crc kubenswrapper[4721]: I0202 13:24:21.984488 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55dd659f54-28qsl" event={"ID":"0f119900-0b52-425a-be0a-0940a4747f89","Type":"ContainerStarted","Data":"a307dcbe4bd53270347a396194e4ad6fcdab051a06bc6fbbccd4da1332e2bfc7"} Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.038783 4721 generic.go:334] "Generic (PLEG): container finished" podID="1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5" containerID="532a8ed94d348484657193a2554795492b450f7f802dc97b0881265e4ef935ae" exitCode=0 Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.039192 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5","Type":"ContainerDied","Data":"532a8ed94d348484657193a2554795492b450f7f802dc97b0881265e4ef935ae"} Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.039229 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5","Type":"ContainerDied","Data":"5372b8d7305b4393c88e00abc4c50b4b02eb1dd6564a279f35710dd3d18e6691"} Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.039251 4721 scope.go:117] "RemoveContainer" containerID="3192e734a06fcd9f8d184e01215add4fd33d5d9771b0e9c27a5f5edaae278699" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.039441 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.076996 4721 generic.go:334] "Generic (PLEG): container finished" podID="92544741-12fa-42ac-ba5b-67179ec9443b" containerID="8b6cdf8693a8045891752ca1818e301a94afd2ff81089e5e3f78d2e62be53c12" exitCode=0 Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.077536 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59d9f7977f-7dt9k" event={"ID":"92544741-12fa-42ac-ba5b-67179ec9443b","Type":"ContainerDied","Data":"8b6cdf8693a8045891752ca1818e301a94afd2ff81089e5e3f78d2e62be53c12"} Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.094946 4721 generic.go:334] "Generic (PLEG): container finished" podID="9fa244a8-7588-4d87-bd5b-cbcd10780c83" containerID="0b784cae8dddc21c2c3af89d409032d5d888340b53c786c1dc27d600d257dd2b" exitCode=0 Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.095024 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-n52pp" event={"ID":"9fa244a8-7588-4d87-bd5b-cbcd10780c83","Type":"ContainerDied","Data":"0b784cae8dddc21c2c3af89d409032d5d888340b53c786c1dc27d600d257dd2b"} Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.100104 4721 generic.go:334] "Generic (PLEG): container finished" podID="d2fa85eb-972d-4369-8103-dd4cd3e2b78a" containerID="67e81c0f84720634dc560e673791d8be5a3f4506fb68a6b94459043bb3aa0bc2" exitCode=0 Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.100190 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" event={"ID":"d2fa85eb-972d-4369-8103-dd4cd3e2b78a","Type":"ContainerDied","Data":"67e81c0f84720634dc560e673791d8be5a3f4506fb68a6b94459043bb3aa0bc2"} Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.108512 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-scripts\") pod \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\" (UID: \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\") " Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.108689 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-config-data\") pod \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\" (UID: \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\") " Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.108741 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbbzr\" (UniqueName: \"kubernetes.io/projected/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-kube-api-access-fbbzr\") pod \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\" (UID: \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\") " Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.108793 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-sg-core-conf-yaml\") pod \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\" (UID: \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\") " Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.108842 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-combined-ca-bundle\") pod \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\" (UID: \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\") " Feb 02 13:24:22 crc kubenswrapper[4721]: 
I0202 13:24:22.108945 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-run-httpd\") pod \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\" (UID: \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\") " Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.108989 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-log-httpd\") pod \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\" (UID: \"1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5\") " Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.117170 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5" (UID: "1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.119961 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-scripts" (OuterVolumeSpecName: "scripts") pod "1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5" (UID: "1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.120357 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5" (UID: "1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.129054 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-kube-api-access-fbbzr" (OuterVolumeSpecName: "kube-api-access-fbbzr") pod "1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5" (UID: "1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5"). InnerVolumeSpecName "kube-api-access-fbbzr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.143294 4721 scope.go:117] "RemoveContainer" containerID="4ae676a120a0891127e9d7af3640c4f6801bac80535f6452a01c2c0a7c50780d" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.153766 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6ccdcdf5fb-gncnr" event={"ID":"8e3f4574-6ad6-4b37-abf5-2005c8692a44","Type":"ContainerStarted","Data":"e41e35e5f4a276a302bdac0022a1e859573103aaa2f1dd17bb86e728220bdc00"} Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.153813 4721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.153847 4721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.155118 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6ccdcdf5fb-gncnr" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.155144 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6ccdcdf5fb-gncnr" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.216481 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.216844 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbbzr\" (UniqueName: \"kubernetes.io/projected/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-kube-api-access-fbbzr\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.216955 4721 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.217054 4721 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.242667 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5" (UID: "1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.263594 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5" (UID: "1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.320683 4721 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.320746 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.379340 4721 scope.go:117] "RemoveContainer" containerID="532a8ed94d348484657193a2554795492b450f7f802dc97b0881265e4ef935ae" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.441891 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-config-data" (OuterVolumeSpecName: "config-data") pod "1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5" (UID: "1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.503594 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6ccdcdf5fb-gncnr" podStartSLOduration=10.503569338 podStartE2EDuration="10.503569338s" podCreationTimestamp="2026-02-02 13:24:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:24:22.232057354 +0000 UTC m=+1402.534571763" watchObservedRunningTime="2026-02-02 13:24:22.503569338 +0000 UTC m=+1402.806083737" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.508216 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74cc678f5-fkzpw"] Feb 02 13:24:22 crc kubenswrapper[4721]: W0202 13:24:22.518429 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40093ddb_a585_427d_88f6_110b4ea07578.slice/crio-a586a050b289fbafeb1602921c5a9696cfc00666e5e7b8ca13447ea3c8fe03e1 WatchSource:0}: Error finding container a586a050b289fbafeb1602921c5a9696cfc00666e5e7b8ca13447ea3c8fe03e1: Status 404 returned error can't find the container with id a586a050b289fbafeb1602921c5a9696cfc00666e5e7b8ca13447ea3c8fe03e1 Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.527666 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.745274 4721 scope.go:117] "RemoveContainer" containerID="3192e734a06fcd9f8d184e01215add4fd33d5d9771b0e9c27a5f5edaae278699" Feb 02 13:24:22 crc kubenswrapper[4721]: E0202 13:24:22.747784 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3192e734a06fcd9f8d184e01215add4fd33d5d9771b0e9c27a5f5edaae278699\": container with ID starting with 3192e734a06fcd9f8d184e01215add4fd33d5d9771b0e9c27a5f5edaae278699 not found: ID does not exist" containerID="3192e734a06fcd9f8d184e01215add4fd33d5d9771b0e9c27a5f5edaae278699" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.747844 4721 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3192e734a06fcd9f8d184e01215add4fd33d5d9771b0e9c27a5f5edaae278699"} err="failed to get container status \"3192e734a06fcd9f8d184e01215add4fd33d5d9771b0e9c27a5f5edaae278699\": rpc error: code = NotFound desc = could not find container \"3192e734a06fcd9f8d184e01215add4fd33d5d9771b0e9c27a5f5edaae278699\": container with ID starting with 3192e734a06fcd9f8d184e01215add4fd33d5d9771b0e9c27a5f5edaae278699 not found: ID does not exist" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.747879 4721 scope.go:117] "RemoveContainer" containerID="4ae676a120a0891127e9d7af3640c4f6801bac80535f6452a01c2c0a7c50780d" Feb 02 13:24:22 crc kubenswrapper[4721]: E0202 13:24:22.748300 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ae676a120a0891127e9d7af3640c4f6801bac80535f6452a01c2c0a7c50780d\": container with ID starting with 4ae676a120a0891127e9d7af3640c4f6801bac80535f6452a01c2c0a7c50780d not found: ID does not exist" containerID="4ae676a120a0891127e9d7af3640c4f6801bac80535f6452a01c2c0a7c50780d" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.748344 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ae676a120a0891127e9d7af3640c4f6801bac80535f6452a01c2c0a7c50780d"} err="failed to get container status \"4ae676a120a0891127e9d7af3640c4f6801bac80535f6452a01c2c0a7c50780d\": rpc error: code = NotFound desc = could not find container \"4ae676a120a0891127e9d7af3640c4f6801bac80535f6452a01c2c0a7c50780d\": container with ID starting with 4ae676a120a0891127e9d7af3640c4f6801bac80535f6452a01c2c0a7c50780d not found: ID does not exist" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.748400 4721 scope.go:117] "RemoveContainer" containerID="532a8ed94d348484657193a2554795492b450f7f802dc97b0881265e4ef935ae" Feb 02 13:24:22 crc kubenswrapper[4721]: E0202 13:24:22.751654 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"532a8ed94d348484657193a2554795492b450f7f802dc97b0881265e4ef935ae\": container with ID starting with 532a8ed94d348484657193a2554795492b450f7f802dc97b0881265e4ef935ae not found: ID does not exist" containerID="532a8ed94d348484657193a2554795492b450f7f802dc97b0881265e4ef935ae" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.751694 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"532a8ed94d348484657193a2554795492b450f7f802dc97b0881265e4ef935ae"} err="failed to get container status \"532a8ed94d348484657193a2554795492b450f7f802dc97b0881265e4ef935ae\": rpc error: code = NotFound desc = could not find container \"532a8ed94d348484657193a2554795492b450f7f802dc97b0881265e4ef935ae\": container with ID starting with 532a8ed94d348484657193a2554795492b450f7f802dc97b0881265e4ef935ae not found: ID does not exist" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.781246 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.802307 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.819088 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:24:22 crc kubenswrapper[4721]: E0202 13:24:22.819934 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5" 
containerName="proxy-httpd" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.819971 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5" containerName="proxy-httpd" Feb 02 13:24:22 crc kubenswrapper[4721]: E0202 13:24:22.819993 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5" containerName="ceilometer-notification-agent" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.819999 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5" containerName="ceilometer-notification-agent" Feb 02 13:24:22 crc kubenswrapper[4721]: E0202 13:24:22.820008 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5" containerName="sg-core" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.820028 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5" containerName="sg-core" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.820338 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5" containerName="sg-core" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.820362 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5" containerName="ceilometer-notification-agent" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.820376 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5" containerName="proxy-httpd" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.823651 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.826219 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.826622 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.831936 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.954598 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1482d2e-b885-44bd-b679-109f0b9698ea-run-httpd\") pod \"ceilometer-0\" (UID: \"a1482d2e-b885-44bd-b679-109f0b9698ea\") " pod="openstack/ceilometer-0" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.954661 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1482d2e-b885-44bd-b679-109f0b9698ea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a1482d2e-b885-44bd-b679-109f0b9698ea\") " pod="openstack/ceilometer-0" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.954711 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4mxr\" (UniqueName: \"kubernetes.io/projected/a1482d2e-b885-44bd-b679-109f0b9698ea-kube-api-access-c4mxr\") pod \"ceilometer-0\" (UID: \"a1482d2e-b885-44bd-b679-109f0b9698ea\") " pod="openstack/ceilometer-0" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.954795 4721 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1482d2e-b885-44bd-b679-109f0b9698ea-scripts\") pod \"ceilometer-0\" (UID: \"a1482d2e-b885-44bd-b679-109f0b9698ea\") " pod="openstack/ceilometer-0" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.954847 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1482d2e-b885-44bd-b679-109f0b9698ea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a1482d2e-b885-44bd-b679-109f0b9698ea\") " pod="openstack/ceilometer-0" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.954884 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1482d2e-b885-44bd-b679-109f0b9698ea-log-httpd\") pod \"ceilometer-0\" (UID: \"a1482d2e-b885-44bd-b679-109f0b9698ea\") " pod="openstack/ceilometer-0" Feb 02 13:24:22 crc kubenswrapper[4721]: I0202 13:24:22.954910 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1482d2e-b885-44bd-b679-109f0b9698ea-config-data\") pod \"ceilometer-0\" (UID: \"a1482d2e-b885-44bd-b679-109f0b9698ea\") " pod="openstack/ceilometer-0" Feb 02 13:24:23 crc kubenswrapper[4721]: I0202 13:24:23.058635 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1482d2e-b885-44bd-b679-109f0b9698ea-scripts\") pod \"ceilometer-0\" (UID: \"a1482d2e-b885-44bd-b679-109f0b9698ea\") " pod="openstack/ceilometer-0" Feb 02 13:24:23 crc kubenswrapper[4721]: I0202 13:24:23.058754 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1482d2e-b885-44bd-b679-109f0b9698ea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a1482d2e-b885-44bd-b679-109f0b9698ea\") " pod="openstack/ceilometer-0" Feb 02 13:24:23 crc kubenswrapper[4721]: I0202 13:24:23.058805 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1482d2e-b885-44bd-b679-109f0b9698ea-log-httpd\") pod \"ceilometer-0\" (UID: \"a1482d2e-b885-44bd-b679-109f0b9698ea\") " pod="openstack/ceilometer-0" Feb 02 13:24:23 crc kubenswrapper[4721]: I0202 13:24:23.058827 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1482d2e-b885-44bd-b679-109f0b9698ea-config-data\") pod \"ceilometer-0\" (UID: \"a1482d2e-b885-44bd-b679-109f0b9698ea\") " pod="openstack/ceilometer-0" Feb 02 13:24:23 crc kubenswrapper[4721]: I0202 13:24:23.059018 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1482d2e-b885-44bd-b679-109f0b9698ea-run-httpd\") pod \"ceilometer-0\" (UID: \"a1482d2e-b885-44bd-b679-109f0b9698ea\") " pod="openstack/ceilometer-0" Feb 02 13:24:23 crc kubenswrapper[4721]: I0202 13:24:23.059088 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1482d2e-b885-44bd-b679-109f0b9698ea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a1482d2e-b885-44bd-b679-109f0b9698ea\") " pod="openstack/ceilometer-0" Feb 02 13:24:23 crc kubenswrapper[4721]: I0202 13:24:23.059155 4721 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4mxr\" (UniqueName: \"kubernetes.io/projected/a1482d2e-b885-44bd-b679-109f0b9698ea-kube-api-access-c4mxr\") pod \"ceilometer-0\" (UID: \"a1482d2e-b885-44bd-b679-109f0b9698ea\") " pod="openstack/ceilometer-0" Feb 02 13:24:23 crc kubenswrapper[4721]: I0202 13:24:23.065999 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1482d2e-b885-44bd-b679-109f0b9698ea-log-httpd\") pod \"ceilometer-0\" (UID: \"a1482d2e-b885-44bd-b679-109f0b9698ea\") " pod="openstack/ceilometer-0" Feb 02 13:24:23 crc kubenswrapper[4721]: I0202 13:24:23.068138 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1482d2e-b885-44bd-b679-109f0b9698ea-run-httpd\") pod \"ceilometer-0\" (UID: \"a1482d2e-b885-44bd-b679-109f0b9698ea\") " pod="openstack/ceilometer-0" Feb 02 13:24:23 crc kubenswrapper[4721]: I0202 13:24:23.070777 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1482d2e-b885-44bd-b679-109f0b9698ea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a1482d2e-b885-44bd-b679-109f0b9698ea\") " pod="openstack/ceilometer-0" Feb 02 13:24:23 crc kubenswrapper[4721]: I0202 13:24:23.073150 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1482d2e-b885-44bd-b679-109f0b9698ea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a1482d2e-b885-44bd-b679-109f0b9698ea\") " pod="openstack/ceilometer-0" Feb 02 13:24:23 crc kubenswrapper[4721]: I0202 13:24:23.074182 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1482d2e-b885-44bd-b679-109f0b9698ea-config-data\") pod \"ceilometer-0\" (UID: \"a1482d2e-b885-44bd-b679-109f0b9698ea\") " pod="openstack/ceilometer-0" Feb 02 13:24:23 crc kubenswrapper[4721]: I0202 13:24:23.074510 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1482d2e-b885-44bd-b679-109f0b9698ea-scripts\") pod \"ceilometer-0\" (UID: \"a1482d2e-b885-44bd-b679-109f0b9698ea\") " pod="openstack/ceilometer-0" Feb 02 13:24:23 crc kubenswrapper[4721]: I0202 13:24:23.082295 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4mxr\" (UniqueName: \"kubernetes.io/projected/a1482d2e-b885-44bd-b679-109f0b9698ea-kube-api-access-c4mxr\") pod \"ceilometer-0\" (UID: \"a1482d2e-b885-44bd-b679-109f0b9698ea\") " pod="openstack/ceilometer-0" Feb 02 13:24:23 crc kubenswrapper[4721]: I0202 13:24:23.164676 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:24:23 crc kubenswrapper[4721]: I0202 13:24:23.195185 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" event={"ID":"d2fa85eb-972d-4369-8103-dd4cd3e2b78a","Type":"ContainerStarted","Data":"7517327cffd1477bc363977d8634c4cb01f4fc58a61eaaad4127a09cf6a0605e"} Feb 02 13:24:23 crc kubenswrapper[4721]: I0202 13:24:23.195262 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" Feb 02 13:24:23 crc kubenswrapper[4721]: I0202 13:24:23.216510 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55dd659f54-28qsl" event={"ID":"0f119900-0b52-425a-be0a-0940a4747f89","Type":"ContainerStarted","Data":"0d5c49970e3de4e3928ae616ad70c665a088d4d4722ff49627ed715494d6560a"} Feb 02 13:24:23 crc kubenswrapper[4721]: I0202 13:24:23.217335 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-55dd659f54-28qsl" Feb 02 13:24:23 crc kubenswrapper[4721]: I0202 13:24:23.217542 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-55dd659f54-28qsl" Feb 02 13:24:23 crc kubenswrapper[4721]: I0202 13:24:23.237121 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-59d9f7977f-7dt9k" podUID="92544741-12fa-42ac-ba5b-67179ec9443b" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.196:9696/\": dial tcp 10.217.0.196:9696: connect: connection refused" Feb 02 13:24:23 crc kubenswrapper[4721]: I0202 13:24:23.245270 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74cc678f5-fkzpw" event={"ID":"40093ddb-a585-427d-88f6-110b4ea07578","Type":"ContainerStarted","Data":"7f9687cede0a9de14646ab0f986c731284c501c40e9a386c0933f293af5436a2"} Feb 02 13:24:23 crc kubenswrapper[4721]: I0202 13:24:23.245334 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74cc678f5-fkzpw" event={"ID":"40093ddb-a585-427d-88f6-110b4ea07578","Type":"ContainerStarted","Data":"a586a050b289fbafeb1602921c5a9696cfc00666e5e7b8ca13447ea3c8fe03e1"} Feb 02 13:24:23 crc kubenswrapper[4721]: I0202 13:24:23.311885 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" podStartSLOduration=4.311866229 podStartE2EDuration="4.311866229s" podCreationTimestamp="2026-02-02 13:24:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:24:23.289491522 +0000 UTC m=+1403.592005921" watchObservedRunningTime="2026-02-02 13:24:23.311866229 +0000 UTC m=+1403.614380618" Feb 02 13:24:23 crc kubenswrapper[4721]: I0202 13:24:23.335606 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-55dd659f54-28qsl" podStartSLOduration=4.335578722 podStartE2EDuration="4.335578722s" podCreationTimestamp="2026-02-02 13:24:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:24:23.33143535 +0000 UTC m=+1403.633949739" watchObservedRunningTime="2026-02-02 13:24:23.335578722 +0000 UTC m=+1403.638093131" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.024175 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mlbxn" Feb 02 13:24:24 crc 
kubenswrapper[4721]: I0202 13:24:24.024550 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mlbxn" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.186323 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.207823 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.207982 4721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.308646 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.326575 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74cc678f5-fkzpw" event={"ID":"40093ddb-a585-427d-88f6-110b4ea07578","Type":"ContainerStarted","Data":"c993532ad5f15f60539ee35f7980604943df19a8ea9995c5b9dab7b406aa3121"} Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.327504 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-74cc678f5-fkzpw" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.407321 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-74cc678f5-fkzpw" podStartSLOduration=3.407296809 podStartE2EDuration="3.407296809s" podCreationTimestamp="2026-02-02 13:24:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:24:24.380488391 +0000 UTC m=+1404.683002780" watchObservedRunningTime="2026-02-02 13:24:24.407296809 +0000 UTC m=+1404.709811198" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.452735 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5" path="/var/lib/kubelet/pods/1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5/volumes" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.508080 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-57c58bbb98-gpbp2"] Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.510539 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-57c58bbb98-gpbp2" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.516639 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.517284 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.521931 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-57c58bbb98-gpbp2"] Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.554193 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbxcv\" (UniqueName: \"kubernetes.io/projected/183927fe-ec27-461b-8284-3e71f5cb666a-kube-api-access-kbxcv\") pod \"barbican-api-57c58bbb98-gpbp2\" (UID: \"183927fe-ec27-461b-8284-3e71f5cb666a\") " pod="openstack/barbican-api-57c58bbb98-gpbp2" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.554303 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/183927fe-ec27-461b-8284-3e71f5cb666a-config-data\") pod \"barbican-api-57c58bbb98-gpbp2\" (UID: \"183927fe-ec27-461b-8284-3e71f5cb666a\") " pod="openstack/barbican-api-57c58bbb98-gpbp2" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.554351 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/183927fe-ec27-461b-8284-3e71f5cb666a-config-data-custom\") pod \"barbican-api-57c58bbb98-gpbp2\" (UID: \"183927fe-ec27-461b-8284-3e71f5cb666a\") " pod="openstack/barbican-api-57c58bbb98-gpbp2" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.554446 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/183927fe-ec27-461b-8284-3e71f5cb666a-internal-tls-certs\") pod \"barbican-api-57c58bbb98-gpbp2\" (UID: \"183927fe-ec27-461b-8284-3e71f5cb666a\") " pod="openstack/barbican-api-57c58bbb98-gpbp2" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.554527 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/183927fe-ec27-461b-8284-3e71f5cb666a-public-tls-certs\") pod \"barbican-api-57c58bbb98-gpbp2\" (UID: \"183927fe-ec27-461b-8284-3e71f5cb666a\") " pod="openstack/barbican-api-57c58bbb98-gpbp2" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.554671 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/183927fe-ec27-461b-8284-3e71f5cb666a-logs\") pod \"barbican-api-57c58bbb98-gpbp2\" (UID: \"183927fe-ec27-461b-8284-3e71f5cb666a\") " pod="openstack/barbican-api-57c58bbb98-gpbp2" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.554931 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/183927fe-ec27-461b-8284-3e71f5cb666a-combined-ca-bundle\") pod \"barbican-api-57c58bbb98-gpbp2\" (UID: \"183927fe-ec27-461b-8284-3e71f5cb666a\") " pod="openstack/barbican-api-57c58bbb98-gpbp2" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.657490 4721 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/183927fe-ec27-461b-8284-3e71f5cb666a-combined-ca-bundle\") pod \"barbican-api-57c58bbb98-gpbp2\" (UID: \"183927fe-ec27-461b-8284-3e71f5cb666a\") " pod="openstack/barbican-api-57c58bbb98-gpbp2" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.657661 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbxcv\" (UniqueName: \"kubernetes.io/projected/183927fe-ec27-461b-8284-3e71f5cb666a-kube-api-access-kbxcv\") pod \"barbican-api-57c58bbb98-gpbp2\" (UID: \"183927fe-ec27-461b-8284-3e71f5cb666a\") " pod="openstack/barbican-api-57c58bbb98-gpbp2" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.657710 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/183927fe-ec27-461b-8284-3e71f5cb666a-config-data\") pod \"barbican-api-57c58bbb98-gpbp2\" (UID: \"183927fe-ec27-461b-8284-3e71f5cb666a\") " pod="openstack/barbican-api-57c58bbb98-gpbp2" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.657744 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/183927fe-ec27-461b-8284-3e71f5cb666a-config-data-custom\") pod \"barbican-api-57c58bbb98-gpbp2\" (UID: \"183927fe-ec27-461b-8284-3e71f5cb666a\") " pod="openstack/barbican-api-57c58bbb98-gpbp2" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.659670 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/183927fe-ec27-461b-8284-3e71f5cb666a-internal-tls-certs\") pod \"barbican-api-57c58bbb98-gpbp2\" (UID: \"183927fe-ec27-461b-8284-3e71f5cb666a\") " pod="openstack/barbican-api-57c58bbb98-gpbp2" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.659906 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/183927fe-ec27-461b-8284-3e71f5cb666a-public-tls-certs\") pod \"barbican-api-57c58bbb98-gpbp2\" (UID: \"183927fe-ec27-461b-8284-3e71f5cb666a\") " pod="openstack/barbican-api-57c58bbb98-gpbp2" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.660180 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/183927fe-ec27-461b-8284-3e71f5cb666a-logs\") pod \"barbican-api-57c58bbb98-gpbp2\" (UID: \"183927fe-ec27-461b-8284-3e71f5cb666a\") " pod="openstack/barbican-api-57c58bbb98-gpbp2" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.661422 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/183927fe-ec27-461b-8284-3e71f5cb666a-logs\") pod \"barbican-api-57c58bbb98-gpbp2\" (UID: \"183927fe-ec27-461b-8284-3e71f5cb666a\") " pod="openstack/barbican-api-57c58bbb98-gpbp2" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.665884 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/183927fe-ec27-461b-8284-3e71f5cb666a-internal-tls-certs\") pod \"barbican-api-57c58bbb98-gpbp2\" (UID: \"183927fe-ec27-461b-8284-3e71f5cb666a\") " pod="openstack/barbican-api-57c58bbb98-gpbp2" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.668847 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/183927fe-ec27-461b-8284-3e71f5cb666a-combined-ca-bundle\") pod \"barbican-api-57c58bbb98-gpbp2\" (UID: \"183927fe-ec27-461b-8284-3e71f5cb666a\") " pod="openstack/barbican-api-57c58bbb98-gpbp2" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.681129 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbxcv\" (UniqueName: \"kubernetes.io/projected/183927fe-ec27-461b-8284-3e71f5cb666a-kube-api-access-kbxcv\") pod \"barbican-api-57c58bbb98-gpbp2\" (UID: \"183927fe-ec27-461b-8284-3e71f5cb666a\") " pod="openstack/barbican-api-57c58bbb98-gpbp2" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.689401 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/183927fe-ec27-461b-8284-3e71f5cb666a-config-data\") pod \"barbican-api-57c58bbb98-gpbp2\" (UID: \"183927fe-ec27-461b-8284-3e71f5cb666a\") " pod="openstack/barbican-api-57c58bbb98-gpbp2" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.698425 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/183927fe-ec27-461b-8284-3e71f5cb666a-config-data-custom\") pod \"barbican-api-57c58bbb98-gpbp2\" (UID: \"183927fe-ec27-461b-8284-3e71f5cb666a\") " pod="openstack/barbican-api-57c58bbb98-gpbp2" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.711698 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/183927fe-ec27-461b-8284-3e71f5cb666a-public-tls-certs\") pod \"barbican-api-57c58bbb98-gpbp2\" (UID: \"183927fe-ec27-461b-8284-3e71f5cb666a\") " pod="openstack/barbican-api-57c58bbb98-gpbp2" Feb 02 13:24:24 crc kubenswrapper[4721]: I0202 13:24:24.858048 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-57c58bbb98-gpbp2" Feb 02 13:24:25 crc kubenswrapper[4721]: I0202 13:24:25.111272 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mlbxn" podUID="37372b76-ef54-4a44-9b56-dea754373219" containerName="registry-server" probeResult="failure" output=< Feb 02 13:24:25 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Feb 02 13:24:25 crc kubenswrapper[4721]: > Feb 02 13:24:25 crc kubenswrapper[4721]: I0202 13:24:25.290564 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 13:24:25 crc kubenswrapper[4721]: I0202 13:24:25.290685 4721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 13:24:25 crc kubenswrapper[4721]: I0202 13:24:25.293564 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 13:24:26 crc kubenswrapper[4721]: W0202 13:24:26.294452 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1482d2e_b885_44bd_b679_109f0b9698ea.slice/crio-c455b716179204c8c0342b961a933847d0497bb7d80b63f7c4dd9a07665bceb2 WatchSource:0}: Error finding container c455b716179204c8c0342b961a933847d0497bb7d80b63f7c4dd9a07665bceb2: Status 404 returned error can't find the container with id c455b716179204c8c0342b961a933847d0497bb7d80b63f7c4dd9a07665bceb2 Feb 02 13:24:26 crc kubenswrapper[4721]: I0202 13:24:26.427624 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-n52pp" Feb 02 13:24:26 crc kubenswrapper[4721]: I0202 13:24:26.459674 4721 generic.go:334] "Generic (PLEG): container finished" podID="ad3578ef-5d1b-4c52-939c-237feadc1c5c" containerID="ce2db44950c758448aaea5320ccdad1fe422fd10d5dc9377dff5887076136a7a" exitCode=0 Feb 02 13:24:26 crc kubenswrapper[4721]: I0202 13:24:26.459750 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7wjxh" event={"ID":"ad3578ef-5d1b-4c52-939c-237feadc1c5c","Type":"ContainerDied","Data":"ce2db44950c758448aaea5320ccdad1fe422fd10d5dc9377dff5887076136a7a"} Feb 02 13:24:26 crc kubenswrapper[4721]: I0202 13:24:26.467678 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1482d2e-b885-44bd-b679-109f0b9698ea","Type":"ContainerStarted","Data":"c455b716179204c8c0342b961a933847d0497bb7d80b63f7c4dd9a07665bceb2"} Feb 02 13:24:26 crc kubenswrapper[4721]: I0202 13:24:26.484413 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-n52pp" event={"ID":"9fa244a8-7588-4d87-bd5b-cbcd10780c83","Type":"ContainerDied","Data":"f8c8f50f7ba68fd272f974a962b3d958bc797bb63b6619f9ead2e2ffc4525a32"} Feb 02 13:24:26 crc kubenswrapper[4721]: I0202 13:24:26.484488 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8c8f50f7ba68fd272f974a962b3d958bc797bb63b6619f9ead2e2ffc4525a32" Feb 02 13:24:26 crc kubenswrapper[4721]: I0202 13:24:26.484441 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-n52pp" Feb 02 13:24:26 crc kubenswrapper[4721]: I0202 13:24:26.577614 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa244a8-7588-4d87-bd5b-cbcd10780c83-combined-ca-bundle\") pod \"9fa244a8-7588-4d87-bd5b-cbcd10780c83\" (UID: \"9fa244a8-7588-4d87-bd5b-cbcd10780c83\") " Feb 02 13:24:26 crc kubenswrapper[4721]: I0202 13:24:26.577705 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ct7f\" (UniqueName: \"kubernetes.io/projected/9fa244a8-7588-4d87-bd5b-cbcd10780c83-kube-api-access-2ct7f\") pod \"9fa244a8-7588-4d87-bd5b-cbcd10780c83\" (UID: \"9fa244a8-7588-4d87-bd5b-cbcd10780c83\") " Feb 02 13:24:26 crc kubenswrapper[4721]: I0202 13:24:26.577984 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa244a8-7588-4d87-bd5b-cbcd10780c83-config-data\") pod \"9fa244a8-7588-4d87-bd5b-cbcd10780c83\" (UID: \"9fa244a8-7588-4d87-bd5b-cbcd10780c83\") " Feb 02 13:24:26 crc kubenswrapper[4721]: I0202 13:24:26.594269 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa244a8-7588-4d87-bd5b-cbcd10780c83-kube-api-access-2ct7f" (OuterVolumeSpecName: "kube-api-access-2ct7f") pod "9fa244a8-7588-4d87-bd5b-cbcd10780c83" (UID: "9fa244a8-7588-4d87-bd5b-cbcd10780c83"). InnerVolumeSpecName "kube-api-access-2ct7f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:24:26 crc kubenswrapper[4721]: I0202 13:24:26.600859 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b2blk" Feb 02 13:24:26 crc kubenswrapper[4721]: I0202 13:24:26.601594 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b2blk" Feb 02 13:24:26 crc kubenswrapper[4721]: I0202 13:24:26.678699 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa244a8-7588-4d87-bd5b-cbcd10780c83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9fa244a8-7588-4d87-bd5b-cbcd10780c83" (UID: "9fa244a8-7588-4d87-bd5b-cbcd10780c83"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:24:26 crc kubenswrapper[4721]: I0202 13:24:26.688483 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa244a8-7588-4d87-bd5b-cbcd10780c83-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:26 crc kubenswrapper[4721]: I0202 13:24:26.688866 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ct7f\" (UniqueName: \"kubernetes.io/projected/9fa244a8-7588-4d87-bd5b-cbcd10780c83-kube-api-access-2ct7f\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:26 crc kubenswrapper[4721]: I0202 13:24:26.773876 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa244a8-7588-4d87-bd5b-cbcd10780c83-config-data" (OuterVolumeSpecName: "config-data") pod "9fa244a8-7588-4d87-bd5b-cbcd10780c83" (UID: "9fa244a8-7588-4d87-bd5b-cbcd10780c83"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:24:26 crc kubenswrapper[4721]: I0202 13:24:26.797952 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa244a8-7588-4d87-bd5b-cbcd10780c83-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:27 crc kubenswrapper[4721]: I0202 13:24:27.066284 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-57c58bbb98-gpbp2"] Feb 02 13:24:27 crc kubenswrapper[4721]: I0202 13:24:27.506161 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6f4497866b-px6fz" event={"ID":"755b5957-fcfa-486a-8e63-d562742d6650","Type":"ContainerStarted","Data":"c0b11b4c5c6c5c0be5728856fe317c744135fa5d6f051bf89ae0e65d2022026b"} Feb 02 13:24:27 crc kubenswrapper[4721]: I0202 13:24:27.509946 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d4595f9f9-4d2g5" event={"ID":"93a7211b-9a15-4765-99e2-520bd1d62ff1","Type":"ContainerStarted","Data":"ae92c461b5387d32c6a43c57e5ab0b32b352f0575d859542465440d2aee9a089"} Feb 02 13:24:27 crc kubenswrapper[4721]: I0202 13:24:27.512780 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57c58bbb98-gpbp2" event={"ID":"183927fe-ec27-461b-8284-3e71f5cb666a","Type":"ContainerStarted","Data":"033d1d8f9a9608dee54f5aa6ae922d818fb5bccfe3b340973427c50aa9ea1e38"} Feb 02 13:24:27 crc kubenswrapper[4721]: I0202 13:24:27.512836 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57c58bbb98-gpbp2" event={"ID":"183927fe-ec27-461b-8284-3e71f5cb666a","Type":"ContainerStarted","Data":"eb604f48b8181a624c73b1e2c40cb06a20997da36f4a8dffb4a69604e13e0240"} Feb 02 13:24:27 crc kubenswrapper[4721]: I0202 13:24:27.690715 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-b2blk" podUID="cb085bc7-03fe-45d5-8293-754aa8a47e79" containerName="registry-server" probeResult="failure" output=< Feb 02 13:24:27 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Feb 02 13:24:27 crc kubenswrapper[4721]: > Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.035642 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-7wjxh" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.150602 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad3578ef-5d1b-4c52-939c-237feadc1c5c-db-sync-config-data\") pod \"ad3578ef-5d1b-4c52-939c-237feadc1c5c\" (UID: \"ad3578ef-5d1b-4c52-939c-237feadc1c5c\") " Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.150735 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad3578ef-5d1b-4c52-939c-237feadc1c5c-etc-machine-id\") pod \"ad3578ef-5d1b-4c52-939c-237feadc1c5c\" (UID: \"ad3578ef-5d1b-4c52-939c-237feadc1c5c\") " Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.150759 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad3578ef-5d1b-4c52-939c-237feadc1c5c-scripts\") pod \"ad3578ef-5d1b-4c52-939c-237feadc1c5c\" (UID: \"ad3578ef-5d1b-4c52-939c-237feadc1c5c\") " Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.150857 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad3578ef-5d1b-4c52-939c-237feadc1c5c-combined-ca-bundle\") pod \"ad3578ef-5d1b-4c52-939c-237feadc1c5c\" (UID: \"ad3578ef-5d1b-4c52-939c-237feadc1c5c\") " Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.150902 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad3578ef-5d1b-4c52-939c-237feadc1c5c-config-data\") pod \"ad3578ef-5d1b-4c52-939c-237feadc1c5c\" (UID: \"ad3578ef-5d1b-4c52-939c-237feadc1c5c\") " Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.151036 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4b6x\" (UniqueName: \"kubernetes.io/projected/ad3578ef-5d1b-4c52-939c-237feadc1c5c-kube-api-access-c4b6x\") pod \"ad3578ef-5d1b-4c52-939c-237feadc1c5c\" (UID: \"ad3578ef-5d1b-4c52-939c-237feadc1c5c\") " Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.151648 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad3578ef-5d1b-4c52-939c-237feadc1c5c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ad3578ef-5d1b-4c52-939c-237feadc1c5c" (UID: "ad3578ef-5d1b-4c52-939c-237feadc1c5c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.159859 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad3578ef-5d1b-4c52-939c-237feadc1c5c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ad3578ef-5d1b-4c52-939c-237feadc1c5c" (UID: "ad3578ef-5d1b-4c52-939c-237feadc1c5c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.165204 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad3578ef-5d1b-4c52-939c-237feadc1c5c-kube-api-access-c4b6x" (OuterVolumeSpecName: "kube-api-access-c4b6x") pod "ad3578ef-5d1b-4c52-939c-237feadc1c5c" (UID: "ad3578ef-5d1b-4c52-939c-237feadc1c5c"). InnerVolumeSpecName "kube-api-access-c4b6x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.171058 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad3578ef-5d1b-4c52-939c-237feadc1c5c-scripts" (OuterVolumeSpecName: "scripts") pod "ad3578ef-5d1b-4c52-939c-237feadc1c5c" (UID: "ad3578ef-5d1b-4c52-939c-237feadc1c5c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.197206 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad3578ef-5d1b-4c52-939c-237feadc1c5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad3578ef-5d1b-4c52-939c-237feadc1c5c" (UID: "ad3578ef-5d1b-4c52-939c-237feadc1c5c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.226270 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad3578ef-5d1b-4c52-939c-237feadc1c5c-config-data" (OuterVolumeSpecName: "config-data") pod "ad3578ef-5d1b-4c52-939c-237feadc1c5c" (UID: "ad3578ef-5d1b-4c52-939c-237feadc1c5c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.254668 4721 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad3578ef-5d1b-4c52-939c-237feadc1c5c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.254721 4721 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad3578ef-5d1b-4c52-939c-237feadc1c5c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.254731 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad3578ef-5d1b-4c52-939c-237feadc1c5c-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.254740 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad3578ef-5d1b-4c52-939c-237feadc1c5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.254750 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad3578ef-5d1b-4c52-939c-237feadc1c5c-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.254758 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4b6x\" (UniqueName: \"kubernetes.io/projected/ad3578ef-5d1b-4c52-939c-237feadc1c5c-kube-api-access-c4b6x\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.531166 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57c58bbb98-gpbp2" event={"ID":"183927fe-ec27-461b-8284-3e71f5cb666a","Type":"ContainerStarted","Data":"2f3c5eefde9602b1bed1df089a10bcc66e2a0b0c453b9fb2dba2991fd6188739"} Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.533032 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-57c58bbb98-gpbp2" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.533215 4721 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/barbican-api-57c58bbb98-gpbp2" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.534580 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d4595f9f9-4d2g5" event={"ID":"93a7211b-9a15-4765-99e2-520bd1d62ff1","Type":"ContainerStarted","Data":"37b5a3e1dc48437eb0809f4f006bad5aafb8f0c93896c78d9e286c2a4981f720"} Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.551383 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6f4497866b-px6fz" event={"ID":"755b5957-fcfa-486a-8e63-d562742d6650","Type":"ContainerStarted","Data":"77fe2816980c07cff5664313909daa4e5208b233a25d31fe97f870e856ead996"} Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.565971 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7wjxh" event={"ID":"ad3578ef-5d1b-4c52-939c-237feadc1c5c","Type":"ContainerDied","Data":"85181cad5536c1551bbbb525ba3a357b7e769080b51a0a4639fbdd8c37e6d7bd"} Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.566009 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85181cad5536c1551bbbb525ba3a357b7e769080b51a0a4639fbdd8c37e6d7bd" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.566078 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7wjxh" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.568324 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-57c58bbb98-gpbp2" podStartSLOduration=4.568299869 podStartE2EDuration="4.568299869s" podCreationTimestamp="2026-02-02 13:24:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:24:28.561180746 +0000 UTC m=+1408.863695135" watchObservedRunningTime="2026-02-02 13:24:28.568299869 +0000 UTC m=+1408.870814258" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.578834 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1482d2e-b885-44bd-b679-109f0b9698ea","Type":"ContainerStarted","Data":"0b3cb07d111bcefe4cce6e8cdec0586b864539201a7ae3eef461e7791d1c123a"} Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.578888 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1482d2e-b885-44bd-b679-109f0b9698ea","Type":"ContainerStarted","Data":"e7a1435ecc8600326126b2f72cf42852a2ff26324fb369f144ca54a8e7a02fe8"} Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.594549 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6f4497866b-px6fz" podStartSLOduration=4.72115542 podStartE2EDuration="10.594530041s" podCreationTimestamp="2026-02-02 13:24:18 +0000 UTC" firstStartedPulling="2026-02-02 13:24:20.660564284 +0000 UTC m=+1400.963078673" lastFinishedPulling="2026-02-02 13:24:26.533938905 +0000 UTC m=+1406.836453294" observedRunningTime="2026-02-02 13:24:28.591435967 +0000 UTC m=+1408.893950356" watchObservedRunningTime="2026-02-02 13:24:28.594530041 +0000 UTC m=+1408.897044430" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.640872 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-d4595f9f9-4d2g5" podStartSLOduration=4.450291863 podStartE2EDuration="10.640847607s" podCreationTimestamp="2026-02-02 13:24:18 +0000 UTC" 
firstStartedPulling="2026-02-02 13:24:20.343144815 +0000 UTC m=+1400.645659214" lastFinishedPulling="2026-02-02 13:24:26.533700569 +0000 UTC m=+1406.836214958" observedRunningTime="2026-02-02 13:24:28.618892312 +0000 UTC m=+1408.921406711" watchObservedRunningTime="2026-02-02 13:24:28.640847607 +0000 UTC m=+1408.943362016" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.855000 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 13:24:28 crc kubenswrapper[4721]: E0202 13:24:28.855734 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa244a8-7588-4d87-bd5b-cbcd10780c83" containerName="heat-db-sync" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.855806 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa244a8-7588-4d87-bd5b-cbcd10780c83" containerName="heat-db-sync" Feb 02 13:24:28 crc kubenswrapper[4721]: E0202 13:24:28.855909 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad3578ef-5d1b-4c52-939c-237feadc1c5c" containerName="cinder-db-sync" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.855966 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad3578ef-5d1b-4c52-939c-237feadc1c5c" containerName="cinder-db-sync" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.856305 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad3578ef-5d1b-4c52-939c-237feadc1c5c" containerName="cinder-db-sync" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.856405 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa244a8-7588-4d87-bd5b-cbcd10780c83" containerName="heat-db-sync" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.859594 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.870994 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.871371 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-v76dv" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.871588 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.877259 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.896703 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.982919 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5dce367e-6a22-454b-bd02-4a69a739af22-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5dce367e-6a22-454b-bd02-4a69a739af22\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.983230 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5dce367e-6a22-454b-bd02-4a69a739af22-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5dce367e-6a22-454b-bd02-4a69a739af22\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.983415 4721 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8v7w\" (UniqueName: \"kubernetes.io/projected/5dce367e-6a22-454b-bd02-4a69a739af22-kube-api-access-r8v7w\") pod \"cinder-scheduler-0\" (UID: \"5dce367e-6a22-454b-bd02-4a69a739af22\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.983556 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dce367e-6a22-454b-bd02-4a69a739af22-scripts\") pod \"cinder-scheduler-0\" (UID: \"5dce367e-6a22-454b-bd02-4a69a739af22\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.983706 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dce367e-6a22-454b-bd02-4a69a739af22-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5dce367e-6a22-454b-bd02-4a69a739af22\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.983869 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dce367e-6a22-454b-bd02-4a69a739af22-config-data\") pod \"cinder-scheduler-0\" (UID: \"5dce367e-6a22-454b-bd02-4a69a739af22\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.984685 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-xjf7x"] Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.985177 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" podUID="d2fa85eb-972d-4369-8103-dd4cd3e2b78a" containerName="dnsmasq-dns" containerID="cri-o://7517327cffd1477bc363977d8634c4cb01f4fc58a61eaaad4127a09cf6a0605e" gracePeriod=10 Feb 02 13:24:28 crc kubenswrapper[4721]: I0202 13:24:28.992703 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.054150 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-9czmj"] Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.075145 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.087151 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dce367e-6a22-454b-bd02-4a69a739af22-config-data\") pod \"cinder-scheduler-0\" (UID: \"5dce367e-6a22-454b-bd02-4a69a739af22\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.087369 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5dce367e-6a22-454b-bd02-4a69a739af22-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5dce367e-6a22-454b-bd02-4a69a739af22\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.087406 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5dce367e-6a22-454b-bd02-4a69a739af22-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5dce367e-6a22-454b-bd02-4a69a739af22\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.087429 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8v7w\" (UniqueName: \"kubernetes.io/projected/5dce367e-6a22-454b-bd02-4a69a739af22-kube-api-access-r8v7w\") pod \"cinder-scheduler-0\" (UID: \"5dce367e-6a22-454b-bd02-4a69a739af22\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.087458 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dce367e-6a22-454b-bd02-4a69a739af22-scripts\") pod \"cinder-scheduler-0\" (UID: \"5dce367e-6a22-454b-bd02-4a69a739af22\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.087547 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dce367e-6a22-454b-bd02-4a69a739af22-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5dce367e-6a22-454b-bd02-4a69a739af22\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.088027 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5dce367e-6a22-454b-bd02-4a69a739af22-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5dce367e-6a22-454b-bd02-4a69a739af22\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.095742 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5dce367e-6a22-454b-bd02-4a69a739af22-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5dce367e-6a22-454b-bd02-4a69a739af22\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.112292 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dce367e-6a22-454b-bd02-4a69a739af22-config-data\") pod \"cinder-scheduler-0\" (UID: \"5dce367e-6a22-454b-bd02-4a69a739af22\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.116647 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5dce367e-6a22-454b-bd02-4a69a739af22-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5dce367e-6a22-454b-bd02-4a69a739af22\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.139045 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8v7w\" (UniqueName: \"kubernetes.io/projected/5dce367e-6a22-454b-bd02-4a69a739af22-kube-api-access-r8v7w\") pod \"cinder-scheduler-0\" (UID: \"5dce367e-6a22-454b-bd02-4a69a739af22\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.205269 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-9czmj\" (UID: \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.205482 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kpkt\" (UniqueName: \"kubernetes.io/projected/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-kube-api-access-5kpkt\") pod \"dnsmasq-dns-5c9776ccc5-9czmj\" (UID: \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.205623 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-9czmj\" (UID: \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.205663 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-config\") pod \"dnsmasq-dns-5c9776ccc5-9czmj\" (UID: \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.205711 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-9czmj\" (UID: \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.205884 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-9czmj\" (UID: \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.228443 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dce367e-6a22-454b-bd02-4a69a739af22-scripts\") pod \"cinder-scheduler-0\" (UID: \"5dce367e-6a22-454b-bd02-4a69a739af22\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.324406 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-9czmj\" (UID: \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.324530 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kpkt\" (UniqueName: \"kubernetes.io/projected/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-kube-api-access-5kpkt\") pod \"dnsmasq-dns-5c9776ccc5-9czmj\" (UID: \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.324616 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-9czmj\" (UID: \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.324642 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-config\") pod \"dnsmasq-dns-5c9776ccc5-9czmj\" (UID: \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.324673 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-9czmj\" (UID: \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.324758 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-9czmj\" (UID: \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.332442 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-9czmj\" (UID: \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.333027 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-9czmj\" (UID: \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.337833 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-9czmj\" (UID: \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.339867 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5c9776ccc5-9czmj\" (UID: \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.352136 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-9czmj"] Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.359341 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-config\") pod \"dnsmasq-dns-5c9776ccc5-9czmj\" (UID: \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.363533 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kpkt\" (UniqueName: \"kubernetes.io/projected/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-kube-api-access-5kpkt\") pod \"dnsmasq-dns-5c9776ccc5-9czmj\" (UID: \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.446416 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.449230 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.504008 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.507664 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.511305 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.545449 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6048763-9be8-4530-b02a-78022c20d668-config-data-custom\") pod \"cinder-api-0\" (UID: \"a6048763-9be8-4530-b02a-78022c20d668\") " pod="openstack/cinder-api-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.545601 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6048763-9be8-4530-b02a-78022c20d668-config-data\") pod \"cinder-api-0\" (UID: \"a6048763-9be8-4530-b02a-78022c20d668\") " pod="openstack/cinder-api-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.545705 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6048763-9be8-4530-b02a-78022c20d668-logs\") pod \"cinder-api-0\" (UID: \"a6048763-9be8-4530-b02a-78022c20d668\") " pod="openstack/cinder-api-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.545736 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6048763-9be8-4530-b02a-78022c20d668-scripts\") pod \"cinder-api-0\" (UID: \"a6048763-9be8-4530-b02a-78022c20d668\") " pod="openstack/cinder-api-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.553451 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8k67\" (UniqueName: \"kubernetes.io/projected/a6048763-9be8-4530-b02a-78022c20d668-kube-api-access-m8k67\") pod \"cinder-api-0\" (UID: \"a6048763-9be8-4530-b02a-78022c20d668\") " pod="openstack/cinder-api-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.553627 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6048763-9be8-4530-b02a-78022c20d668-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a6048763-9be8-4530-b02a-78022c20d668\") " pod="openstack/cinder-api-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.553849 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a6048763-9be8-4530-b02a-78022c20d668-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a6048763-9be8-4530-b02a-78022c20d668\") " pod="openstack/cinder-api-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.578488 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.630374 4721 generic.go:334] "Generic (PLEG): container finished" podID="92544741-12fa-42ac-ba5b-67179ec9443b" containerID="da489fd3d8eceb37f5d84a7cb93a4298ee1e4fc025f5a8e61c61624f47e5cc8a" exitCode=0 Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.630475 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59d9f7977f-7dt9k" event={"ID":"92544741-12fa-42ac-ba5b-67179ec9443b","Type":"ContainerDied","Data":"da489fd3d8eceb37f5d84a7cb93a4298ee1e4fc025f5a8e61c61624f47e5cc8a"} Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.655673 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6048763-9be8-4530-b02a-78022c20d668-config-data-custom\") pod \"cinder-api-0\" (UID: \"a6048763-9be8-4530-b02a-78022c20d668\") " pod="openstack/cinder-api-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.655813 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6048763-9be8-4530-b02a-78022c20d668-config-data\") pod \"cinder-api-0\" (UID: \"a6048763-9be8-4530-b02a-78022c20d668\") " pod="openstack/cinder-api-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.655889 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6048763-9be8-4530-b02a-78022c20d668-logs\") pod \"cinder-api-0\" (UID: \"a6048763-9be8-4530-b02a-78022c20d668\") " pod="openstack/cinder-api-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.655955 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6048763-9be8-4530-b02a-78022c20d668-scripts\") pod \"cinder-api-0\" (UID: \"a6048763-9be8-4530-b02a-78022c20d668\") " pod="openstack/cinder-api-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.655973 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8k67\" (UniqueName: \"kubernetes.io/projected/a6048763-9be8-4530-b02a-78022c20d668-kube-api-access-m8k67\") pod \"cinder-api-0\" (UID: \"a6048763-9be8-4530-b02a-78022c20d668\") " pod="openstack/cinder-api-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.656026 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6048763-9be8-4530-b02a-78022c20d668-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a6048763-9be8-4530-b02a-78022c20d668\") " pod="openstack/cinder-api-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.656119 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a6048763-9be8-4530-b02a-78022c20d668-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a6048763-9be8-4530-b02a-78022c20d668\") " pod="openstack/cinder-api-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.667920 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6048763-9be8-4530-b02a-78022c20d668-logs\") pod \"cinder-api-0\" (UID: \"a6048763-9be8-4530-b02a-78022c20d668\") " pod="openstack/cinder-api-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.684758 4721 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6048763-9be8-4530-b02a-78022c20d668-config-data-custom\") pod \"cinder-api-0\" (UID: \"a6048763-9be8-4530-b02a-78022c20d668\") " pod="openstack/cinder-api-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.693591 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6048763-9be8-4530-b02a-78022c20d668-scripts\") pod \"cinder-api-0\" (UID: \"a6048763-9be8-4530-b02a-78022c20d668\") " pod="openstack/cinder-api-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.706734 4721 generic.go:334] "Generic (PLEG): container finished" podID="d2fa85eb-972d-4369-8103-dd4cd3e2b78a" containerID="7517327cffd1477bc363977d8634c4cb01f4fc58a61eaaad4127a09cf6a0605e" exitCode=0 Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.706939 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" event={"ID":"d2fa85eb-972d-4369-8103-dd4cd3e2b78a","Type":"ContainerDied","Data":"7517327cffd1477bc363977d8634c4cb01f4fc58a61eaaad4127a09cf6a0605e"} Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.720062 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6048763-9be8-4530-b02a-78022c20d668-config-data\") pod \"cinder-api-0\" (UID: \"a6048763-9be8-4530-b02a-78022c20d668\") " pod="openstack/cinder-api-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.722714 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8k67\" (UniqueName: \"kubernetes.io/projected/a6048763-9be8-4530-b02a-78022c20d668-kube-api-access-m8k67\") pod \"cinder-api-0\" (UID: \"a6048763-9be8-4530-b02a-78022c20d668\") " pod="openstack/cinder-api-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.731371 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6048763-9be8-4530-b02a-78022c20d668-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a6048763-9be8-4530-b02a-78022c20d668\") " pod="openstack/cinder-api-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.733286 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a6048763-9be8-4530-b02a-78022c20d668-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a6048763-9be8-4530-b02a-78022c20d668\") " pod="openstack/cinder-api-0" Feb 02 13:24:29 crc kubenswrapper[4721]: I0202 13:24:29.986523 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.117663 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.194106 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-ovsdbserver-sb\") pod \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\" (UID: \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\") " Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.194179 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-dns-svc\") pod \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\" (UID: \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\") " Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.194217 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5p7rn\" (UniqueName: \"kubernetes.io/projected/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-kube-api-access-5p7rn\") pod \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\" (UID: \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\") " Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.194392 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-config\") pod \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\" (UID: \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\") " Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.194487 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-dns-swift-storage-0\") pod \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\" (UID: \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\") " Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.194575 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-ovsdbserver-nb\") pod \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\" (UID: \"d2fa85eb-972d-4369-8103-dd4cd3e2b78a\") " Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.238331 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-kube-api-access-5p7rn" (OuterVolumeSpecName: "kube-api-access-5p7rn") pod "d2fa85eb-972d-4369-8103-dd4cd3e2b78a" (UID: "d2fa85eb-972d-4369-8103-dd4cd3e2b78a"). InnerVolumeSpecName "kube-api-access-5p7rn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.300621 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5p7rn\" (UniqueName: \"kubernetes.io/projected/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-kube-api-access-5p7rn\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.326762 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-config" (OuterVolumeSpecName: "config") pod "d2fa85eb-972d-4369-8103-dd4cd3e2b78a" (UID: "d2fa85eb-972d-4369-8103-dd4cd3e2b78a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.330592 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d2fa85eb-972d-4369-8103-dd4cd3e2b78a" (UID: "d2fa85eb-972d-4369-8103-dd4cd3e2b78a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.368056 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d2fa85eb-972d-4369-8103-dd4cd3e2b78a" (UID: "d2fa85eb-972d-4369-8103-dd4cd3e2b78a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.372738 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d2fa85eb-972d-4369-8103-dd4cd3e2b78a" (UID: "d2fa85eb-972d-4369-8103-dd4cd3e2b78a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.403679 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.403708 4721 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.403722 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.403731 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.448165 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d2fa85eb-972d-4369-8103-dd4cd3e2b78a" (UID: "d2fa85eb-972d-4369-8103-dd4cd3e2b78a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.506909 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2fa85eb-972d-4369-8103-dd4cd3e2b78a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.713539 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-59d9f7977f-7dt9k" Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.757121 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59d9f7977f-7dt9k" event={"ID":"92544741-12fa-42ac-ba5b-67179ec9443b","Type":"ContainerDied","Data":"f6156b432d02a8811d9c46ced7a52980aa20788c34415c25f1c59b54dac366c9"} Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.757201 4721 scope.go:117] "RemoveContainer" containerID="8b6cdf8693a8045891752ca1818e301a94afd2ff81089e5e3f78d2e62be53c12" Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.757426 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59d9f7977f-7dt9k" Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.776816 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.776834 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" event={"ID":"d2fa85eb-972d-4369-8103-dd4cd3e2b78a","Type":"ContainerDied","Data":"99da6e129c53d9990311d147953c7a4dfacacce031a5f1e7ffc090745f296f41"} Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.788889 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1482d2e-b885-44bd-b679-109f0b9698ea","Type":"ContainerStarted","Data":"4d53de923a6215191fa0ab435e823d19598bea6ec82ecee59ac252608725c58c"} Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.817221 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-ovndb-tls-certs\") pod \"92544741-12fa-42ac-ba5b-67179ec9443b\" (UID: \"92544741-12fa-42ac-ba5b-67179ec9443b\") " Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.817369 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-httpd-config\") pod \"92544741-12fa-42ac-ba5b-67179ec9443b\" (UID: \"92544741-12fa-42ac-ba5b-67179ec9443b\") " Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.817456 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zxkh\" (UniqueName: \"kubernetes.io/projected/92544741-12fa-42ac-ba5b-67179ec9443b-kube-api-access-9zxkh\") pod \"92544741-12fa-42ac-ba5b-67179ec9443b\" (UID: \"92544741-12fa-42ac-ba5b-67179ec9443b\") " Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.817604 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-combined-ca-bundle\") pod \"92544741-12fa-42ac-ba5b-67179ec9443b\" (UID: \"92544741-12fa-42ac-ba5b-67179ec9443b\") " Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.817716 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-config\") pod \"92544741-12fa-42ac-ba5b-67179ec9443b\" (UID: \"92544741-12fa-42ac-ba5b-67179ec9443b\") " Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.817848 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-internal-tls-certs\") pod \"92544741-12fa-42ac-ba5b-67179ec9443b\" (UID: \"92544741-12fa-42ac-ba5b-67179ec9443b\") " Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.817926 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-public-tls-certs\") pod \"92544741-12fa-42ac-ba5b-67179ec9443b\" (UID: \"92544741-12fa-42ac-ba5b-67179ec9443b\") " Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.829328 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92544741-12fa-42ac-ba5b-67179ec9443b-kube-api-access-9zxkh" (OuterVolumeSpecName: "kube-api-access-9zxkh") pod "92544741-12fa-42ac-ba5b-67179ec9443b" (UID: "92544741-12fa-42ac-ba5b-67179ec9443b"). InnerVolumeSpecName "kube-api-access-9zxkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.835476 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-xjf7x"] Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.858947 4721 scope.go:117] "RemoveContainer" containerID="da489fd3d8eceb37f5d84a7cb93a4298ee1e4fc025f5a8e61c61624f47e5cc8a" Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.871394 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "92544741-12fa-42ac-ba5b-67179ec9443b" (UID: "92544741-12fa-42ac-ba5b-67179ec9443b"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.874714 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-xjf7x"] Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.924605 4721 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.924657 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zxkh\" (UniqueName: \"kubernetes.io/projected/92544741-12fa-42ac-ba5b-67179ec9443b-kube-api-access-9zxkh\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:30 crc kubenswrapper[4721]: I0202 13:24:30.996502 4721 scope.go:117] "RemoveContainer" containerID="7517327cffd1477bc363977d8634c4cb01f4fc58a61eaaad4127a09cf6a0605e" Feb 02 13:24:31 crc kubenswrapper[4721]: I0202 13:24:31.022396 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "92544741-12fa-42ac-ba5b-67179ec9443b" (UID: "92544741-12fa-42ac-ba5b-67179ec9443b"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:24:31 crc kubenswrapper[4721]: I0202 13:24:31.026643 4721 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:31 crc kubenswrapper[4721]: I0202 13:24:31.036571 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 13:24:31 crc kubenswrapper[4721]: I0202 13:24:31.060220 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92544741-12fa-42ac-ba5b-67179ec9443b" (UID: "92544741-12fa-42ac-ba5b-67179ec9443b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:24:31 crc kubenswrapper[4721]: I0202 13:24:31.083611 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-9czmj"] Feb 02 13:24:31 crc kubenswrapper[4721]: I0202 13:24:31.103385 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "92544741-12fa-42ac-ba5b-67179ec9443b" (UID: "92544741-12fa-42ac-ba5b-67179ec9443b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:24:31 crc kubenswrapper[4721]: I0202 13:24:31.113967 4721 scope.go:117] "RemoveContainer" containerID="67e81c0f84720634dc560e673791d8be5a3f4506fb68a6b94459043bb3aa0bc2" Feb 02 13:24:31 crc kubenswrapper[4721]: I0202 13:24:31.118046 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-config" (OuterVolumeSpecName: "config") pod "92544741-12fa-42ac-ba5b-67179ec9443b" (UID: "92544741-12fa-42ac-ba5b-67179ec9443b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:24:31 crc kubenswrapper[4721]: I0202 13:24:31.130675 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:31 crc kubenswrapper[4721]: I0202 13:24:31.130710 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:31 crc kubenswrapper[4721]: I0202 13:24:31.130724 4721 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:31 crc kubenswrapper[4721]: I0202 13:24:31.148147 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "92544741-12fa-42ac-ba5b-67179ec9443b" (UID: "92544741-12fa-42ac-ba5b-67179ec9443b"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:24:31 crc kubenswrapper[4721]: I0202 13:24:31.232895 4721 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/92544741-12fa-42ac-ba5b-67179ec9443b-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:31 crc kubenswrapper[4721]: I0202 13:24:31.276509 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 13:24:31 crc kubenswrapper[4721]: I0202 13:24:31.667142 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59d9f7977f-7dt9k"] Feb 02 13:24:31 crc kubenswrapper[4721]: I0202 13:24:31.768652 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-59d9f7977f-7dt9k"] Feb 02 13:24:31 crc kubenswrapper[4721]: I0202 13:24:31.837672 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" event={"ID":"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f","Type":"ContainerStarted","Data":"0e5ec1ece5d46afcb7f1cd43125f924d0c21a100ba9e55bfc9c2436d4917aef7"} Feb 02 13:24:31 crc kubenswrapper[4721]: I0202 13:24:31.866296 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5dce367e-6a22-454b-bd02-4a69a739af22","Type":"ContainerStarted","Data":"308b644f1394a2cc3225f510fa06697b0a8ae1e9f8e2aa6c15bae4c005148f01"} Feb 02 13:24:31 crc kubenswrapper[4721]: I0202 13:24:31.883013 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a6048763-9be8-4530-b02a-78022c20d668","Type":"ContainerStarted","Data":"722b50e74b8986a87887949a03c96fc3de9ed0d41b61df9c48e69c902703c27b"} Feb 02 13:24:32 crc kubenswrapper[4721]: I0202 13:24:32.444254 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92544741-12fa-42ac-ba5b-67179ec9443b" path="/var/lib/kubelet/pods/92544741-12fa-42ac-ba5b-67179ec9443b/volumes" Feb 02 13:24:32 crc kubenswrapper[4721]: I0202 13:24:32.445196 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2fa85eb-972d-4369-8103-dd4cd3e2b78a" path="/var/lib/kubelet/pods/d2fa85eb-972d-4369-8103-dd4cd3e2b78a/volumes" Feb 02 13:24:32 crc kubenswrapper[4721]: I0202 13:24:32.719043 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 02 13:24:32 crc kubenswrapper[4721]: I0202 13:24:32.902346 4721 generic.go:334] "Generic (PLEG): container finished" podID="f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f" containerID="4cdfecffc79f466040cbdb103a7f0aa0881b7f82c12764f848d66d7d7434f505" exitCode=0 Feb 02 13:24:32 crc kubenswrapper[4721]: I0202 13:24:32.902405 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" event={"ID":"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f","Type":"ContainerDied","Data":"4cdfecffc79f466040cbdb103a7f0aa0881b7f82c12764f848d66d7d7434f505"} Feb 02 13:24:33 crc kubenswrapper[4721]: I0202 13:24:33.855182 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-55dd659f54-28qsl" podUID="0f119900-0b52-425a-be0a-0940a4747f89" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.207:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 13:24:33 crc kubenswrapper[4721]: I0202 13:24:33.858776 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-55dd659f54-28qsl" podUID="0f119900-0b52-425a-be0a-0940a4747f89" 
containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.207:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 13:24:33 crc kubenswrapper[4721]: I0202 13:24:33.866416 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-55dd659f54-28qsl" Feb 02 13:24:33 crc kubenswrapper[4721]: I0202 13:24:33.972145 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1482d2e-b885-44bd-b679-109f0b9698ea","Type":"ContainerStarted","Data":"f6ced5c029fbb0ced62ee14010b08872173b1b7d00f61d4cb14f85035de775b1"} Feb 02 13:24:33 crc kubenswrapper[4721]: I0202 13:24:33.973460 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 13:24:33 crc kubenswrapper[4721]: I0202 13:24:33.976526 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" event={"ID":"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f","Type":"ContainerStarted","Data":"79b4d95f7f9db98cc9b0a4ae6c82ff501b94e112bfc5f0a0bf15dd2adbe2a2a3"} Feb 02 13:24:33 crc kubenswrapper[4721]: I0202 13:24:33.977376 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" Feb 02 13:24:33 crc kubenswrapper[4721]: I0202 13:24:33.979147 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5dce367e-6a22-454b-bd02-4a69a739af22","Type":"ContainerStarted","Data":"d6fc3e691cab5185ee5e5e86d01fdac6b5e28431f28e13143ff03ccfe61af6f1"} Feb 02 13:24:33 crc kubenswrapper[4721]: I0202 13:24:33.980593 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a6048763-9be8-4530-b02a-78022c20d668","Type":"ContainerStarted","Data":"e2b0754f177aa2fb457d07e7ccb63cf3d2db89f9bc5c8b27f274b7d5e64b1681"} Feb 02 13:24:34 crc kubenswrapper[4721]: I0202 13:24:34.015455 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.901810154 podStartE2EDuration="12.015436621s" podCreationTimestamp="2026-02-02 13:24:22 +0000 UTC" firstStartedPulling="2026-02-02 13:24:26.307430223 +0000 UTC m=+1406.609944612" lastFinishedPulling="2026-02-02 13:24:32.42105668 +0000 UTC m=+1412.723571079" observedRunningTime="2026-02-02 13:24:33.994921264 +0000 UTC m=+1414.297435653" watchObservedRunningTime="2026-02-02 13:24:34.015436621 +0000 UTC m=+1414.317951010" Feb 02 13:24:34 crc kubenswrapper[4721]: I0202 13:24:34.026012 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" podStartSLOduration=6.025994617 podStartE2EDuration="6.025994617s" podCreationTimestamp="2026-02-02 13:24:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:24:34.022097612 +0000 UTC m=+1414.324612011" watchObservedRunningTime="2026-02-02 13:24:34.025994617 +0000 UTC m=+1414.328508996" Feb 02 13:24:34 crc kubenswrapper[4721]: I0202 13:24:34.603921 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-55dd659f54-28qsl" Feb 02 13:24:34 crc kubenswrapper[4721]: I0202 13:24:34.693907 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-85ff748b95-xjf7x" podUID="d2fa85eb-972d-4369-8103-dd4cd3e2b78a" containerName="dnsmasq-dns" probeResult="failure" 
output="dial tcp 10.217.0.206:5353: i/o timeout" Feb 02 13:24:35 crc kubenswrapper[4721]: I0202 13:24:35.049385 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5dce367e-6a22-454b-bd02-4a69a739af22","Type":"ContainerStarted","Data":"f2ec4bfdada8e2932f35d145116d0bf335c4ba0ff8527d320662aec6daa98921"} Feb 02 13:24:35 crc kubenswrapper[4721]: I0202 13:24:35.071327 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a6048763-9be8-4530-b02a-78022c20d668","Type":"ContainerStarted","Data":"014d914a5ce67334b42e317ebe6baa31bca4aa273d11d84a4123f675577064a0"} Feb 02 13:24:35 crc kubenswrapper[4721]: I0202 13:24:35.071374 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a6048763-9be8-4530-b02a-78022c20d668" containerName="cinder-api-log" containerID="cri-o://e2b0754f177aa2fb457d07e7ccb63cf3d2db89f9bc5c8b27f274b7d5e64b1681" gracePeriod=30 Feb 02 13:24:35 crc kubenswrapper[4721]: I0202 13:24:35.071508 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a6048763-9be8-4530-b02a-78022c20d668" containerName="cinder-api" containerID="cri-o://014d914a5ce67334b42e317ebe6baa31bca4aa273d11d84a4123f675577064a0" gracePeriod=30 Feb 02 13:24:35 crc kubenswrapper[4721]: I0202 13:24:35.071789 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 02 13:24:35 crc kubenswrapper[4721]: I0202 13:24:35.086225 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.068730596 podStartE2EDuration="7.086199771s" podCreationTimestamp="2026-02-02 13:24:28 +0000 UTC" firstStartedPulling="2026-02-02 13:24:31.041835724 +0000 UTC m=+1411.344350113" lastFinishedPulling="2026-02-02 13:24:32.059304899 +0000 UTC m=+1412.361819288" observedRunningTime="2026-02-02 13:24:35.075391577 +0000 UTC m=+1415.377905976" watchObservedRunningTime="2026-02-02 13:24:35.086199771 +0000 UTC m=+1415.388714160" Feb 02 13:24:35 crc kubenswrapper[4721]: I0202 13:24:35.109120 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.109097392 podStartE2EDuration="6.109097392s" podCreationTimestamp="2026-02-02 13:24:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:24:35.099506471 +0000 UTC m=+1415.402020860" watchObservedRunningTime="2026-02-02 13:24:35.109097392 +0000 UTC m=+1415.411611821" Feb 02 13:24:35 crc kubenswrapper[4721]: I0202 13:24:35.122576 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mlbxn" podUID="37372b76-ef54-4a44-9b56-dea754373219" containerName="registry-server" probeResult="failure" output=< Feb 02 13:24:35 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Feb 02 13:24:35 crc kubenswrapper[4721]: > Feb 02 13:24:36 crc kubenswrapper[4721]: I0202 13:24:36.118005 4721 generic.go:334] "Generic (PLEG): container finished" podID="a6048763-9be8-4530-b02a-78022c20d668" containerID="e2b0754f177aa2fb457d07e7ccb63cf3d2db89f9bc5c8b27f274b7d5e64b1681" exitCode=143 Feb 02 13:24:36 crc kubenswrapper[4721]: I0202 13:24:36.118218 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"a6048763-9be8-4530-b02a-78022c20d668","Type":"ContainerDied","Data":"e2b0754f177aa2fb457d07e7ccb63cf3d2db89f9bc5c8b27f274b7d5e64b1681"} Feb 02 13:24:37 crc kubenswrapper[4721]: I0202 13:24:37.676838 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-b2blk" podUID="cb085bc7-03fe-45d5-8293-754aa8a47e79" containerName="registry-server" probeResult="failure" output=< Feb 02 13:24:37 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Feb 02 13:24:37 crc kubenswrapper[4721]: > Feb 02 13:24:37 crc kubenswrapper[4721]: I0202 13:24:37.984756 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-57c58bbb98-gpbp2" Feb 02 13:24:38 crc kubenswrapper[4721]: I0202 13:24:38.461910 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-784866f846-pjz9x" Feb 02 13:24:38 crc kubenswrapper[4721]: I0202 13:24:38.646994 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-57c58bbb98-gpbp2" Feb 02 13:24:38 crc kubenswrapper[4721]: I0202 13:24:38.730431 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-55dd659f54-28qsl"] Feb 02 13:24:38 crc kubenswrapper[4721]: I0202 13:24:38.730731 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-55dd659f54-28qsl" podUID="0f119900-0b52-425a-be0a-0940a4747f89" containerName="barbican-api-log" containerID="cri-o://a307dcbe4bd53270347a396194e4ad6fcdab051a06bc6fbbccd4da1332e2bfc7" gracePeriod=30 Feb 02 13:24:38 crc kubenswrapper[4721]: I0202 13:24:38.730889 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-55dd659f54-28qsl" podUID="0f119900-0b52-425a-be0a-0940a4747f89" containerName="barbican-api" containerID="cri-o://0d5c49970e3de4e3928ae616ad70c665a088d4d4722ff49627ed715494d6560a" gracePeriod=30 Feb 02 13:24:39 crc kubenswrapper[4721]: I0202 13:24:39.180864 4721 generic.go:334] "Generic (PLEG): container finished" podID="0f119900-0b52-425a-be0a-0940a4747f89" containerID="a307dcbe4bd53270347a396194e4ad6fcdab051a06bc6fbbccd4da1332e2bfc7" exitCode=143 Feb 02 13:24:39 crc kubenswrapper[4721]: I0202 13:24:39.181249 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55dd659f54-28qsl" event={"ID":"0f119900-0b52-425a-be0a-0940a4747f89","Type":"ContainerDied","Data":"a307dcbe4bd53270347a396194e4ad6fcdab051a06bc6fbbccd4da1332e2bfc7"} Feb 02 13:24:39 crc kubenswrapper[4721]: I0202 13:24:39.512282 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 02 13:24:39 crc kubenswrapper[4721]: I0202 13:24:39.581235 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" Feb 02 13:24:39 crc kubenswrapper[4721]: I0202 13:24:39.682449 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-mnq9d"] Feb 02 13:24:39 crc kubenswrapper[4721]: I0202 13:24:39.682685 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" podUID="4b323e62-7a54-4935-8e47-2df809ecb2f9" containerName="dnsmasq-dns" containerID="cri-o://38beb737c04426dba147dffdfeaee7f338c6927e4b1cdcf880429b179ca9988b" gracePeriod=10 Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.056706 4721 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.193078 4721 generic.go:334] "Generic (PLEG): container finished" podID="4b323e62-7a54-4935-8e47-2df809ecb2f9" containerID="38beb737c04426dba147dffdfeaee7f338c6927e4b1cdcf880429b179ca9988b" exitCode=0 Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.193417 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" event={"ID":"4b323e62-7a54-4935-8e47-2df809ecb2f9","Type":"ContainerDied","Data":"38beb737c04426dba147dffdfeaee7f338c6927e4b1cdcf880429b179ca9988b"} Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.298648 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.631625 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.727368 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 02 13:24:40 crc kubenswrapper[4721]: E0202 13:24:40.728258 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b323e62-7a54-4935-8e47-2df809ecb2f9" containerName="dnsmasq-dns" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.728304 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b323e62-7a54-4935-8e47-2df809ecb2f9" containerName="dnsmasq-dns" Feb 02 13:24:40 crc kubenswrapper[4721]: E0202 13:24:40.728318 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b323e62-7a54-4935-8e47-2df809ecb2f9" containerName="init" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.728328 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b323e62-7a54-4935-8e47-2df809ecb2f9" containerName="init" Feb 02 13:24:40 crc kubenswrapper[4721]: E0202 13:24:40.728356 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2fa85eb-972d-4369-8103-dd4cd3e2b78a" containerName="dnsmasq-dns" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.728365 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2fa85eb-972d-4369-8103-dd4cd3e2b78a" containerName="dnsmasq-dns" Feb 02 13:24:40 crc kubenswrapper[4721]: E0202 13:24:40.728383 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92544741-12fa-42ac-ba5b-67179ec9443b" containerName="neutron-httpd" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.728390 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="92544741-12fa-42ac-ba5b-67179ec9443b" containerName="neutron-httpd" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.728391 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-dns-swift-storage-0\") pod \"4b323e62-7a54-4935-8e47-2df809ecb2f9\" (UID: \"4b323e62-7a54-4935-8e47-2df809ecb2f9\") " Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.728493 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-dns-svc\") pod \"4b323e62-7a54-4935-8e47-2df809ecb2f9\" (UID: \"4b323e62-7a54-4935-8e47-2df809ecb2f9\") " Feb 02 13:24:40 crc kubenswrapper[4721]: E0202 13:24:40.728409 4721 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="92544741-12fa-42ac-ba5b-67179ec9443b" containerName="neutron-api" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.728542 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="92544741-12fa-42ac-ba5b-67179ec9443b" containerName="neutron-api" Feb 02 13:24:40 crc kubenswrapper[4721]: E0202 13:24:40.728621 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2fa85eb-972d-4369-8103-dd4cd3e2b78a" containerName="init" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.728631 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2fa85eb-972d-4369-8103-dd4cd3e2b78a" containerName="init" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.729205 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="92544741-12fa-42ac-ba5b-67179ec9443b" containerName="neutron-api" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.729251 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2fa85eb-972d-4369-8103-dd4cd3e2b78a" containerName="dnsmasq-dns" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.729285 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b323e62-7a54-4935-8e47-2df809ecb2f9" containerName="dnsmasq-dns" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.729297 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="92544741-12fa-42ac-ba5b-67179ec9443b" containerName="neutron-httpd" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.730402 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.731592 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-ovsdbserver-sb\") pod \"4b323e62-7a54-4935-8e47-2df809ecb2f9\" (UID: \"4b323e62-7a54-4935-8e47-2df809ecb2f9\") " Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.731666 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-config\") pod \"4b323e62-7a54-4935-8e47-2df809ecb2f9\" (UID: \"4b323e62-7a54-4935-8e47-2df809ecb2f9\") " Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.731753 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-ovsdbserver-nb\") pod \"4b323e62-7a54-4935-8e47-2df809ecb2f9\" (UID: \"4b323e62-7a54-4935-8e47-2df809ecb2f9\") " Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.731830 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcdcl\" (UniqueName: \"kubernetes.io/projected/4b323e62-7a54-4935-8e47-2df809ecb2f9-kube-api-access-gcdcl\") pod \"4b323e62-7a54-4935-8e47-2df809ecb2f9\" (UID: \"4b323e62-7a54-4935-8e47-2df809ecb2f9\") " Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.739772 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.741221 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-hl2f5" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.741413 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 02 
13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.801058 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.813284 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b323e62-7a54-4935-8e47-2df809ecb2f9-kube-api-access-gcdcl" (OuterVolumeSpecName: "kube-api-access-gcdcl") pod "4b323e62-7a54-4935-8e47-2df809ecb2f9" (UID: "4b323e62-7a54-4935-8e47-2df809ecb2f9"). InnerVolumeSpecName "kube-api-access-gcdcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.835291 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7588b1fc-b958-40e3-bec2-abc209c1a802-openstack-config\") pod \"openstackclient\" (UID: \"7588b1fc-b958-40e3-bec2-abc209c1a802\") " pod="openstack/openstackclient" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.835361 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7588b1fc-b958-40e3-bec2-abc209c1a802-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7588b1fc-b958-40e3-bec2-abc209c1a802\") " pod="openstack/openstackclient" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.835425 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7588b1fc-b958-40e3-bec2-abc209c1a802-openstack-config-secret\") pod \"openstackclient\" (UID: \"7588b1fc-b958-40e3-bec2-abc209c1a802\") " pod="openstack/openstackclient" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.835469 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j4zs\" (UniqueName: \"kubernetes.io/projected/7588b1fc-b958-40e3-bec2-abc209c1a802-kube-api-access-9j4zs\") pod \"openstackclient\" (UID: \"7588b1fc-b958-40e3-bec2-abc209c1a802\") " pod="openstack/openstackclient" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.835877 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4b323e62-7a54-4935-8e47-2df809ecb2f9" (UID: "4b323e62-7a54-4935-8e47-2df809ecb2f9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.836088 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcdcl\" (UniqueName: \"kubernetes.io/projected/4b323e62-7a54-4935-8e47-2df809ecb2f9-kube-api-access-gcdcl\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.841742 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4b323e62-7a54-4935-8e47-2df809ecb2f9" (UID: "4b323e62-7a54-4935-8e47-2df809ecb2f9"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.855883 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-config" (OuterVolumeSpecName: "config") pod "4b323e62-7a54-4935-8e47-2df809ecb2f9" (UID: "4b323e62-7a54-4935-8e47-2df809ecb2f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.859421 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4b323e62-7a54-4935-8e47-2df809ecb2f9" (UID: "4b323e62-7a54-4935-8e47-2df809ecb2f9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.893234 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4b323e62-7a54-4935-8e47-2df809ecb2f9" (UID: "4b323e62-7a54-4935-8e47-2df809ecb2f9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.937887 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j4zs\" (UniqueName: \"kubernetes.io/projected/7588b1fc-b958-40e3-bec2-abc209c1a802-kube-api-access-9j4zs\") pod \"openstackclient\" (UID: \"7588b1fc-b958-40e3-bec2-abc209c1a802\") " pod="openstack/openstackclient" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.938122 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7588b1fc-b958-40e3-bec2-abc209c1a802-openstack-config\") pod \"openstackclient\" (UID: \"7588b1fc-b958-40e3-bec2-abc209c1a802\") " pod="openstack/openstackclient" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.938163 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7588b1fc-b958-40e3-bec2-abc209c1a802-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7588b1fc-b958-40e3-bec2-abc209c1a802\") " pod="openstack/openstackclient" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.938226 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7588b1fc-b958-40e3-bec2-abc209c1a802-openstack-config-secret\") pod \"openstackclient\" (UID: \"7588b1fc-b958-40e3-bec2-abc209c1a802\") " pod="openstack/openstackclient" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.938290 4721 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.938302 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.938312 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.938321 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.938470 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b323e62-7a54-4935-8e47-2df809ecb2f9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.939213 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7588b1fc-b958-40e3-bec2-abc209c1a802-openstack-config\") pod \"openstackclient\" (UID: \"7588b1fc-b958-40e3-bec2-abc209c1a802\") " pod="openstack/openstackclient" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.942227 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7588b1fc-b958-40e3-bec2-abc209c1a802-openstack-config-secret\") pod \"openstackclient\" (UID: \"7588b1fc-b958-40e3-bec2-abc209c1a802\") " pod="openstack/openstackclient" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.942971 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7588b1fc-b958-40e3-bec2-abc209c1a802-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7588b1fc-b958-40e3-bec2-abc209c1a802\") " pod="openstack/openstackclient" Feb 02 13:24:40 crc kubenswrapper[4721]: I0202 13:24:40.984041 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j4zs\" (UniqueName: \"kubernetes.io/projected/7588b1fc-b958-40e3-bec2-abc209c1a802-kube-api-access-9j4zs\") pod \"openstackclient\" (UID: \"7588b1fc-b958-40e3-bec2-abc209c1a802\") " pod="openstack/openstackclient" Feb 02 13:24:41 crc kubenswrapper[4721]: I0202 13:24:41.041099 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 02 13:24:41 crc kubenswrapper[4721]: I0202 13:24:41.042205 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 13:24:41 crc kubenswrapper[4721]: I0202 13:24:41.074473 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 02 13:24:41 crc kubenswrapper[4721]: I0202 13:24:41.138322 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 02 13:24:41 crc kubenswrapper[4721]: I0202 13:24:41.140371 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 02 13:24:41 crc kubenswrapper[4721]: I0202 13:24:41.190224 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 13:24:41 crc kubenswrapper[4721]: I0202 13:24:41.263656 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="5dce367e-6a22-454b-bd02-4a69a739af22" containerName="cinder-scheduler" containerID="cri-o://d6fc3e691cab5185ee5e5e86d01fdac6b5e28431f28e13143ff03ccfe61af6f1" gracePeriod=30 Feb 02 13:24:41 crc kubenswrapper[4721]: I0202 13:24:41.264039 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="5dce367e-6a22-454b-bd02-4a69a739af22" containerName="probe" containerID="cri-o://f2ec4bfdada8e2932f35d145116d0bf335c4ba0ff8527d320662aec6daa98921" gracePeriod=30 Feb 02 13:24:41 crc kubenswrapper[4721]: I0202 13:24:41.264160 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" Feb 02 13:24:41 crc kubenswrapper[4721]: I0202 13:24:41.264184 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-mnq9d" event={"ID":"4b323e62-7a54-4935-8e47-2df809ecb2f9","Type":"ContainerDied","Data":"2496bbb6bf1b14291e279777fda509f8bd82ead08412e32b9ec7b5156bb292a2"} Feb 02 13:24:41 crc kubenswrapper[4721]: I0202 13:24:41.264920 4721 scope.go:117] "RemoveContainer" containerID="38beb737c04426dba147dffdfeaee7f338c6927e4b1cdcf880429b179ca9988b" Feb 02 13:24:41 crc kubenswrapper[4721]: I0202 13:24:41.329958 4721 scope.go:117] "RemoveContainer" containerID="ece0ac3461ccc45508567010797bf0f995740c80a06e3f1018effd1949e58be1" Feb 02 13:24:41 crc kubenswrapper[4721]: E0202 13:24:41.335442 4721 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 02 13:24:41 crc kubenswrapper[4721]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_7588b1fc-b958-40e3-bec2-abc209c1a802_0(ef60e78374690088d8e634fbbdd05b55f2d30a3b90c095b2db8f72cec0e0c2a5): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ef60e78374690088d8e634fbbdd05b55f2d30a3b90c095b2db8f72cec0e0c2a5" Netns:"/var/run/netns/6b2c382b-0ca6-4b76-a643-497fa6d23186" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=ef60e78374690088d8e634fbbdd05b55f2d30a3b90c095b2db8f72cec0e0c2a5;K8S_POD_UID=7588b1fc-b958-40e3-bec2-abc209c1a802" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/7588b1fc-b958-40e3-bec2-abc209c1a802]: expected pod UID "7588b1fc-b958-40e3-bec2-abc209c1a802" but got "32729b18-a175-4abd-a8cf-392d318b64d8" from Kube API Feb 02 13:24:41 crc kubenswrapper[4721]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 02 13:24:41 crc kubenswrapper[4721]: > Feb 02 13:24:41 crc kubenswrapper[4721]: E0202 13:24:41.335492 4721 kuberuntime_sandbox.go:72] "Failed to 
create sandbox for pod" err=< Feb 02 13:24:41 crc kubenswrapper[4721]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_7588b1fc-b958-40e3-bec2-abc209c1a802_0(ef60e78374690088d8e634fbbdd05b55f2d30a3b90c095b2db8f72cec0e0c2a5): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ef60e78374690088d8e634fbbdd05b55f2d30a3b90c095b2db8f72cec0e0c2a5" Netns:"/var/run/netns/6b2c382b-0ca6-4b76-a643-497fa6d23186" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=ef60e78374690088d8e634fbbdd05b55f2d30a3b90c095b2db8f72cec0e0c2a5;K8S_POD_UID=7588b1fc-b958-40e3-bec2-abc209c1a802" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/7588b1fc-b958-40e3-bec2-abc209c1a802]: expected pod UID "7588b1fc-b958-40e3-bec2-abc209c1a802" but got "32729b18-a175-4abd-a8cf-392d318b64d8" from Kube API Feb 02 13:24:41 crc kubenswrapper[4721]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 02 13:24:41 crc kubenswrapper[4721]: > pod="openstack/openstackclient" Feb 02 13:24:41 crc kubenswrapper[4721]: I0202 13:24:41.351749 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32729b18-a175-4abd-a8cf-392d318b64d8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"32729b18-a175-4abd-a8cf-392d318b64d8\") " pod="openstack/openstackclient" Feb 02 13:24:41 crc kubenswrapper[4721]: I0202 13:24:41.352401 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/32729b18-a175-4abd-a8cf-392d318b64d8-openstack-config\") pod \"openstackclient\" (UID: \"32729b18-a175-4abd-a8cf-392d318b64d8\") " pod="openstack/openstackclient" Feb 02 13:24:41 crc kubenswrapper[4721]: I0202 13:24:41.352577 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/32729b18-a175-4abd-a8cf-392d318b64d8-openstack-config-secret\") pod \"openstackclient\" (UID: \"32729b18-a175-4abd-a8cf-392d318b64d8\") " pod="openstack/openstackclient" Feb 02 13:24:41 crc kubenswrapper[4721]: I0202 13:24:41.352917 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kbsn\" (UniqueName: \"kubernetes.io/projected/32729b18-a175-4abd-a8cf-392d318b64d8-kube-api-access-2kbsn\") pod \"openstackclient\" (UID: \"32729b18-a175-4abd-a8cf-392d318b64d8\") " pod="openstack/openstackclient" Feb 02 13:24:41 crc kubenswrapper[4721]: I0202 13:24:41.423257 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-mnq9d"] Feb 02 13:24:41 crc kubenswrapper[4721]: I0202 13:24:41.440404 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-mnq9d"] Feb 02 13:24:41 crc kubenswrapper[4721]: 
I0202 13:24:41.454828 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32729b18-a175-4abd-a8cf-392d318b64d8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"32729b18-a175-4abd-a8cf-392d318b64d8\") " pod="openstack/openstackclient" Feb 02 13:24:41 crc kubenswrapper[4721]: I0202 13:24:41.454899 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/32729b18-a175-4abd-a8cf-392d318b64d8-openstack-config\") pod \"openstackclient\" (UID: \"32729b18-a175-4abd-a8cf-392d318b64d8\") " pod="openstack/openstackclient" Feb 02 13:24:41 crc kubenswrapper[4721]: I0202 13:24:41.454967 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/32729b18-a175-4abd-a8cf-392d318b64d8-openstack-config-secret\") pod \"openstackclient\" (UID: \"32729b18-a175-4abd-a8cf-392d318b64d8\") " pod="openstack/openstackclient" Feb 02 13:24:41 crc kubenswrapper[4721]: I0202 13:24:41.455039 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kbsn\" (UniqueName: \"kubernetes.io/projected/32729b18-a175-4abd-a8cf-392d318b64d8-kube-api-access-2kbsn\") pod \"openstackclient\" (UID: \"32729b18-a175-4abd-a8cf-392d318b64d8\") " pod="openstack/openstackclient" Feb 02 13:24:41 crc kubenswrapper[4721]: I0202 13:24:41.455955 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/32729b18-a175-4abd-a8cf-392d318b64d8-openstack-config\") pod \"openstackclient\" (UID: \"32729b18-a175-4abd-a8cf-392d318b64d8\") " pod="openstack/openstackclient" Feb 02 13:24:41 crc kubenswrapper[4721]: I0202 13:24:41.464503 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/32729b18-a175-4abd-a8cf-392d318b64d8-openstack-config-secret\") pod \"openstackclient\" (UID: \"32729b18-a175-4abd-a8cf-392d318b64d8\") " pod="openstack/openstackclient" Feb 02 13:24:41 crc kubenswrapper[4721]: I0202 13:24:41.464514 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32729b18-a175-4abd-a8cf-392d318b64d8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"32729b18-a175-4abd-a8cf-392d318b64d8\") " pod="openstack/openstackclient" Feb 02 13:24:41 crc kubenswrapper[4721]: I0202 13:24:41.473899 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kbsn\" (UniqueName: \"kubernetes.io/projected/32729b18-a175-4abd-a8cf-392d318b64d8-kube-api-access-2kbsn\") pod \"openstackclient\" (UID: \"32729b18-a175-4abd-a8cf-392d318b64d8\") " pod="openstack/openstackclient" Feb 02 13:24:41 crc kubenswrapper[4721]: I0202 13:24:41.555496 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.226497 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-55dd659f54-28qsl" podUID="0f119900-0b52-425a-be0a-0940a4747f89" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.207:9311/healthcheck\": read tcp 10.217.0.2:36096->10.217.0.207:9311: read: connection reset by peer" Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.226563 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-55dd659f54-28qsl" podUID="0f119900-0b52-425a-be0a-0940a4747f89" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.207:9311/healthcheck\": read tcp 10.217.0.2:36084->10.217.0.207:9311: read: connection reset by peer" Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.289104 4721 generic.go:334] "Generic (PLEG): container finished" podID="5dce367e-6a22-454b-bd02-4a69a739af22" containerID="f2ec4bfdada8e2932f35d145116d0bf335c4ba0ff8527d320662aec6daa98921" exitCode=0 Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.289147 4721 generic.go:334] "Generic (PLEG): container finished" podID="5dce367e-6a22-454b-bd02-4a69a739af22" containerID="d6fc3e691cab5185ee5e5e86d01fdac6b5e28431f28e13143ff03ccfe61af6f1" exitCode=0 Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.289200 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5dce367e-6a22-454b-bd02-4a69a739af22","Type":"ContainerDied","Data":"f2ec4bfdada8e2932f35d145116d0bf335c4ba0ff8527d320662aec6daa98921"} Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.289234 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5dce367e-6a22-454b-bd02-4a69a739af22","Type":"ContainerDied","Data":"d6fc3e691cab5185ee5e5e86d01fdac6b5e28431f28e13143ff03ccfe61af6f1"} Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.301736 4721 generic.go:334] "Generic (PLEG): container finished" podID="0f119900-0b52-425a-be0a-0940a4747f89" containerID="0d5c49970e3de4e3928ae616ad70c665a088d4d4722ff49627ed715494d6560a" exitCode=0 Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.301843 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.302155 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55dd659f54-28qsl" event={"ID":"0f119900-0b52-425a-be0a-0940a4747f89","Type":"ContainerDied","Data":"0d5c49970e3de4e3928ae616ad70c665a088d4d4722ff49627ed715494d6560a"} Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.308294 4721 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="7588b1fc-b958-40e3-bec2-abc209c1a802" podUID="32729b18-a175-4abd-a8cf-392d318b64d8" Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.324348 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.441969 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b323e62-7a54-4935-8e47-2df809ecb2f9" path="/var/lib/kubelet/pods/4b323e62-7a54-4935-8e47-2df809ecb2f9/volumes" Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.480725 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.601040 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7588b1fc-b958-40e3-bec2-abc209c1a802-openstack-config-secret\") pod \"7588b1fc-b958-40e3-bec2-abc209c1a802\" (UID: \"7588b1fc-b958-40e3-bec2-abc209c1a802\") " Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.601284 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7588b1fc-b958-40e3-bec2-abc209c1a802-openstack-config\") pod \"7588b1fc-b958-40e3-bec2-abc209c1a802\" (UID: \"7588b1fc-b958-40e3-bec2-abc209c1a802\") " Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.601313 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9j4zs\" (UniqueName: \"kubernetes.io/projected/7588b1fc-b958-40e3-bec2-abc209c1a802-kube-api-access-9j4zs\") pod \"7588b1fc-b958-40e3-bec2-abc209c1a802\" (UID: \"7588b1fc-b958-40e3-bec2-abc209c1a802\") " Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.601385 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7588b1fc-b958-40e3-bec2-abc209c1a802-combined-ca-bundle\") pod \"7588b1fc-b958-40e3-bec2-abc209c1a802\" (UID: \"7588b1fc-b958-40e3-bec2-abc209c1a802\") " Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.612586 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7588b1fc-b958-40e3-bec2-abc209c1a802-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "7588b1fc-b958-40e3-bec2-abc209c1a802" (UID: "7588b1fc-b958-40e3-bec2-abc209c1a802"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.623315 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7588b1fc-b958-40e3-bec2-abc209c1a802-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7588b1fc-b958-40e3-bec2-abc209c1a802" (UID: "7588b1fc-b958-40e3-bec2-abc209c1a802"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.648764 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7588b1fc-b958-40e3-bec2-abc209c1a802-kube-api-access-9j4zs" (OuterVolumeSpecName: "kube-api-access-9j4zs") pod "7588b1fc-b958-40e3-bec2-abc209c1a802" (UID: "7588b1fc-b958-40e3-bec2-abc209c1a802"). InnerVolumeSpecName "kube-api-access-9j4zs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.650012 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7588b1fc-b958-40e3-bec2-abc209c1a802-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "7588b1fc-b958-40e3-bec2-abc209c1a802" (UID: "7588b1fc-b958-40e3-bec2-abc209c1a802"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.733029 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7588b1fc-b958-40e3-bec2-abc209c1a802-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.733096 4721 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7588b1fc-b958-40e3-bec2-abc209c1a802-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.733113 4721 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7588b1fc-b958-40e3-bec2-abc209c1a802-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.733126 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9j4zs\" (UniqueName: \"kubernetes.io/projected/7588b1fc-b958-40e3-bec2-abc209c1a802-kube-api-access-9j4zs\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.905883 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-55dd659f54-28qsl" Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.941963 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7tv9\" (UniqueName: \"kubernetes.io/projected/0f119900-0b52-425a-be0a-0940a4747f89-kube-api-access-k7tv9\") pod \"0f119900-0b52-425a-be0a-0940a4747f89\" (UID: \"0f119900-0b52-425a-be0a-0940a4747f89\") " Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.942048 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f119900-0b52-425a-be0a-0940a4747f89-config-data-custom\") pod \"0f119900-0b52-425a-be0a-0940a4747f89\" (UID: \"0f119900-0b52-425a-be0a-0940a4747f89\") " Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.942130 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f119900-0b52-425a-be0a-0940a4747f89-config-data\") pod \"0f119900-0b52-425a-be0a-0940a4747f89\" (UID: \"0f119900-0b52-425a-be0a-0940a4747f89\") " Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.942214 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f119900-0b52-425a-be0a-0940a4747f89-logs\") pod \"0f119900-0b52-425a-be0a-0940a4747f89\" (UID: \"0f119900-0b52-425a-be0a-0940a4747f89\") " Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.942297 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f119900-0b52-425a-be0a-0940a4747f89-combined-ca-bundle\") pod \"0f119900-0b52-425a-be0a-0940a4747f89\" (UID: \"0f119900-0b52-425a-be0a-0940a4747f89\") " Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.945408 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f119900-0b52-425a-be0a-0940a4747f89-logs" (OuterVolumeSpecName: "logs") pod "0f119900-0b52-425a-be0a-0940a4747f89" (UID: "0f119900-0b52-425a-be0a-0940a4747f89"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.959886 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f119900-0b52-425a-be0a-0940a4747f89-kube-api-access-k7tv9" (OuterVolumeSpecName: "kube-api-access-k7tv9") pod "0f119900-0b52-425a-be0a-0940a4747f89" (UID: "0f119900-0b52-425a-be0a-0940a4747f89"). InnerVolumeSpecName "kube-api-access-k7tv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.960635 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f119900-0b52-425a-be0a-0940a4747f89-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0f119900-0b52-425a-be0a-0940a4747f89" (UID: "0f119900-0b52-425a-be0a-0940a4747f89"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:24:42 crc kubenswrapper[4721]: I0202 13:24:42.996364 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.021317 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f119900-0b52-425a-be0a-0940a4747f89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f119900-0b52-425a-be0a-0940a4747f89" (UID: "0f119900-0b52-425a-be0a-0940a4747f89"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.044711 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8v7w\" (UniqueName: \"kubernetes.io/projected/5dce367e-6a22-454b-bd02-4a69a739af22-kube-api-access-r8v7w\") pod \"5dce367e-6a22-454b-bd02-4a69a739af22\" (UID: \"5dce367e-6a22-454b-bd02-4a69a739af22\") " Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.045129 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dce367e-6a22-454b-bd02-4a69a739af22-config-data\") pod \"5dce367e-6a22-454b-bd02-4a69a739af22\" (UID: \"5dce367e-6a22-454b-bd02-4a69a739af22\") " Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.045366 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dce367e-6a22-454b-bd02-4a69a739af22-scripts\") pod \"5dce367e-6a22-454b-bd02-4a69a739af22\" (UID: \"5dce367e-6a22-454b-bd02-4a69a739af22\") " Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.045535 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5dce367e-6a22-454b-bd02-4a69a739af22-config-data-custom\") pod \"5dce367e-6a22-454b-bd02-4a69a739af22\" (UID: \"5dce367e-6a22-454b-bd02-4a69a739af22\") " Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.045651 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5dce367e-6a22-454b-bd02-4a69a739af22-etc-machine-id\") pod \"5dce367e-6a22-454b-bd02-4a69a739af22\" (UID: \"5dce367e-6a22-454b-bd02-4a69a739af22\") " Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.045749 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5dce367e-6a22-454b-bd02-4a69a739af22-combined-ca-bundle\") pod \"5dce367e-6a22-454b-bd02-4a69a739af22\" (UID: \"5dce367e-6a22-454b-bd02-4a69a739af22\") " Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.046580 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7tv9\" (UniqueName: \"kubernetes.io/projected/0f119900-0b52-425a-be0a-0940a4747f89-kube-api-access-k7tv9\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.047142 4721 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f119900-0b52-425a-be0a-0940a4747f89-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.047238 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f119900-0b52-425a-be0a-0940a4747f89-logs\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.047330 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f119900-0b52-425a-be0a-0940a4747f89-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.048029 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5dce367e-6a22-454b-bd02-4a69a739af22-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5dce367e-6a22-454b-bd02-4a69a739af22" (UID: "5dce367e-6a22-454b-bd02-4a69a739af22"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.054134 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dce367e-6a22-454b-bd02-4a69a739af22-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5dce367e-6a22-454b-bd02-4a69a739af22" (UID: "5dce367e-6a22-454b-bd02-4a69a739af22"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.067619 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dce367e-6a22-454b-bd02-4a69a739af22-scripts" (OuterVolumeSpecName: "scripts") pod "5dce367e-6a22-454b-bd02-4a69a739af22" (UID: "5dce367e-6a22-454b-bd02-4a69a739af22"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.076712 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dce367e-6a22-454b-bd02-4a69a739af22-kube-api-access-r8v7w" (OuterVolumeSpecName: "kube-api-access-r8v7w") pod "5dce367e-6a22-454b-bd02-4a69a739af22" (UID: "5dce367e-6a22-454b-bd02-4a69a739af22"). InnerVolumeSpecName "kube-api-access-r8v7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.106228 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f119900-0b52-425a-be0a-0940a4747f89-config-data" (OuterVolumeSpecName: "config-data") pod "0f119900-0b52-425a-be0a-0940a4747f89" (UID: "0f119900-0b52-425a-be0a-0940a4747f89"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.154605 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8v7w\" (UniqueName: \"kubernetes.io/projected/5dce367e-6a22-454b-bd02-4a69a739af22-kube-api-access-r8v7w\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.154636 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f119900-0b52-425a-be0a-0940a4747f89-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.154646 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dce367e-6a22-454b-bd02-4a69a739af22-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.154655 4721 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5dce367e-6a22-454b-bd02-4a69a739af22-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.154663 4721 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5dce367e-6a22-454b-bd02-4a69a739af22-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.211242 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dce367e-6a22-454b-bd02-4a69a739af22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5dce367e-6a22-454b-bd02-4a69a739af22" (UID: "5dce367e-6a22-454b-bd02-4a69a739af22"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.245543 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dce367e-6a22-454b-bd02-4a69a739af22-config-data" (OuterVolumeSpecName: "config-data") pod "5dce367e-6a22-454b-bd02-4a69a739af22" (UID: "5dce367e-6a22-454b-bd02-4a69a739af22"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.257519 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dce367e-6a22-454b-bd02-4a69a739af22-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.257559 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dce367e-6a22-454b-bd02-4a69a739af22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.314999 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"32729b18-a175-4abd-a8cf-392d318b64d8","Type":"ContainerStarted","Data":"095a1845a869c5d12625722bd7e1bd061627e2ac7f6820de74ab59cf641e82b6"} Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.320192 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5dce367e-6a22-454b-bd02-4a69a739af22","Type":"ContainerDied","Data":"308b644f1394a2cc3225f510fa06697b0a8ae1e9f8e2aa6c15bae4c005148f01"} Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.320235 4721 scope.go:117] "RemoveContainer" containerID="f2ec4bfdada8e2932f35d145116d0bf335c4ba0ff8527d320662aec6daa98921" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.320231 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.328636 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.329222 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-55dd659f54-28qsl" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.329391 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55dd659f54-28qsl" event={"ID":"0f119900-0b52-425a-be0a-0940a4747f89","Type":"ContainerDied","Data":"5366c1b45533360e87afcbc53d2e0646f4fb35a11a7eba0ff655b53f1aa8c34c"} Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.332435 4721 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="7588b1fc-b958-40e3-bec2-abc209c1a802" podUID="32729b18-a175-4abd-a8cf-392d318b64d8" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.386151 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.392022 4721 scope.go:117] "RemoveContainer" containerID="d6fc3e691cab5185ee5e5e86d01fdac6b5e28431f28e13143ff03ccfe61af6f1" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.401541 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.440908 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-55dd659f54-28qsl"] Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.461406 4721 scope.go:117] "RemoveContainer" containerID="0d5c49970e3de4e3928ae616ad70c665a088d4d4722ff49627ed715494d6560a" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.461540 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 13:24:43 crc kubenswrapper[4721]: E0202 13:24:43.462045 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f119900-0b52-425a-be0a-0940a4747f89" containerName="barbican-api-log" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.466085 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f119900-0b52-425a-be0a-0940a4747f89" containerName="barbican-api-log" Feb 02 13:24:43 crc kubenswrapper[4721]: E0202 13:24:43.466139 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f119900-0b52-425a-be0a-0940a4747f89" containerName="barbican-api" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.466148 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f119900-0b52-425a-be0a-0940a4747f89" containerName="barbican-api" Feb 02 13:24:43 crc kubenswrapper[4721]: E0202 13:24:43.466161 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dce367e-6a22-454b-bd02-4a69a739af22" containerName="cinder-scheduler" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.466170 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dce367e-6a22-454b-bd02-4a69a739af22" containerName="cinder-scheduler" Feb 02 13:24:43 crc kubenswrapper[4721]: E0202 13:24:43.466189 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dce367e-6a22-454b-bd02-4a69a739af22" containerName="probe" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.466195 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dce367e-6a22-454b-bd02-4a69a739af22" containerName="probe" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.466542 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dce367e-6a22-454b-bd02-4a69a739af22" containerName="cinder-scheduler" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.466563 4721 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0f119900-0b52-425a-be0a-0940a4747f89" containerName="barbican-api" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.466573 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dce367e-6a22-454b-bd02-4a69a739af22" containerName="probe" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.466592 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f119900-0b52-425a-be0a-0940a4747f89" containerName="barbican-api-log" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.467847 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.471882 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.488672 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-55dd659f54-28qsl"] Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.515618 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.566847 4721 scope.go:117] "RemoveContainer" containerID="a307dcbe4bd53270347a396194e4ad6fcdab051a06bc6fbbccd4da1332e2bfc7" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.573802 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d63f1df-bbdc-42ee-a234-2d691a3ce7ba-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7d63f1df-bbdc-42ee-a234-2d691a3ce7ba\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.573951 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d63f1df-bbdc-42ee-a234-2d691a3ce7ba-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7d63f1df-bbdc-42ee-a234-2d691a3ce7ba\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.574038 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdvzs\" (UniqueName: \"kubernetes.io/projected/7d63f1df-bbdc-42ee-a234-2d691a3ce7ba-kube-api-access-sdvzs\") pod \"cinder-scheduler-0\" (UID: \"7d63f1df-bbdc-42ee-a234-2d691a3ce7ba\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.574207 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d63f1df-bbdc-42ee-a234-2d691a3ce7ba-scripts\") pod \"cinder-scheduler-0\" (UID: \"7d63f1df-bbdc-42ee-a234-2d691a3ce7ba\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.574286 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d63f1df-bbdc-42ee-a234-2d691a3ce7ba-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7d63f1df-bbdc-42ee-a234-2d691a3ce7ba\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.574348 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d63f1df-bbdc-42ee-a234-2d691a3ce7ba-config-data\") pod 
\"cinder-scheduler-0\" (UID: \"7d63f1df-bbdc-42ee-a234-2d691a3ce7ba\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.676445 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d63f1df-bbdc-42ee-a234-2d691a3ce7ba-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7d63f1df-bbdc-42ee-a234-2d691a3ce7ba\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.676532 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d63f1df-bbdc-42ee-a234-2d691a3ce7ba-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7d63f1df-bbdc-42ee-a234-2d691a3ce7ba\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.676575 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdvzs\" (UniqueName: \"kubernetes.io/projected/7d63f1df-bbdc-42ee-a234-2d691a3ce7ba-kube-api-access-sdvzs\") pod \"cinder-scheduler-0\" (UID: \"7d63f1df-bbdc-42ee-a234-2d691a3ce7ba\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.676634 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d63f1df-bbdc-42ee-a234-2d691a3ce7ba-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7d63f1df-bbdc-42ee-a234-2d691a3ce7ba\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.676653 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d63f1df-bbdc-42ee-a234-2d691a3ce7ba-scripts\") pod \"cinder-scheduler-0\" (UID: \"7d63f1df-bbdc-42ee-a234-2d691a3ce7ba\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.676824 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d63f1df-bbdc-42ee-a234-2d691a3ce7ba-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7d63f1df-bbdc-42ee-a234-2d691a3ce7ba\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.676866 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d63f1df-bbdc-42ee-a234-2d691a3ce7ba-config-data\") pod \"cinder-scheduler-0\" (UID: \"7d63f1df-bbdc-42ee-a234-2d691a3ce7ba\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.680367 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d63f1df-bbdc-42ee-a234-2d691a3ce7ba-scripts\") pod \"cinder-scheduler-0\" (UID: \"7d63f1df-bbdc-42ee-a234-2d691a3ce7ba\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.681964 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d63f1df-bbdc-42ee-a234-2d691a3ce7ba-config-data\") pod \"cinder-scheduler-0\" (UID: \"7d63f1df-bbdc-42ee-a234-2d691a3ce7ba\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.682401 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/7d63f1df-bbdc-42ee-a234-2d691a3ce7ba-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7d63f1df-bbdc-42ee-a234-2d691a3ce7ba\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.682560 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d63f1df-bbdc-42ee-a234-2d691a3ce7ba-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7d63f1df-bbdc-42ee-a234-2d691a3ce7ba\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.697567 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdvzs\" (UniqueName: \"kubernetes.io/projected/7d63f1df-bbdc-42ee-a234-2d691a3ce7ba-kube-api-access-sdvzs\") pod \"cinder-scheduler-0\" (UID: \"7d63f1df-bbdc-42ee-a234-2d691a3ce7ba\") " pod="openstack/cinder-scheduler-0" Feb 02 13:24:43 crc kubenswrapper[4721]: I0202 13:24:43.829896 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 13:24:44 crc kubenswrapper[4721]: I0202 13:24:44.398909 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 13:24:44 crc kubenswrapper[4721]: I0202 13:24:44.434461 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f119900-0b52-425a-be0a-0940a4747f89" path="/var/lib/kubelet/pods/0f119900-0b52-425a-be0a-0940a4747f89/volumes" Feb 02 13:24:44 crc kubenswrapper[4721]: I0202 13:24:44.435420 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dce367e-6a22-454b-bd02-4a69a739af22" path="/var/lib/kubelet/pods/5dce367e-6a22-454b-bd02-4a69a739af22/volumes" Feb 02 13:24:44 crc kubenswrapper[4721]: I0202 13:24:44.436234 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7588b1fc-b958-40e3-bec2-abc209c1a802" path="/var/lib/kubelet/pods/7588b1fc-b958-40e3-bec2-abc209c1a802/volumes" Feb 02 13:24:44 crc kubenswrapper[4721]: I0202 13:24:44.697222 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 02 13:24:44 crc kubenswrapper[4721]: I0202 13:24:44.763231 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:24:44 crc kubenswrapper[4721]: I0202 13:24:44.763284 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:24:45 crc kubenswrapper[4721]: I0202 13:24:45.097509 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mlbxn" podUID="37372b76-ef54-4a44-9b56-dea754373219" containerName="registry-server" probeResult="failure" output=< Feb 02 13:24:45 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Feb 02 13:24:45 crc kubenswrapper[4721]: > Feb 02 13:24:45 crc kubenswrapper[4721]: I0202 13:24:45.113251 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6ccdcdf5fb-gncnr" Feb 02 13:24:45 crc 
kubenswrapper[4721]: I0202 13:24:45.176254 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6ccdcdf5fb-gncnr" Feb 02 13:24:45 crc kubenswrapper[4721]: I0202 13:24:45.295243 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-75b75c495b-kpsxz"] Feb 02 13:24:45 crc kubenswrapper[4721]: I0202 13:24:45.295733 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-75b75c495b-kpsxz" podUID="873ec78b-5777-4560-a744-c4789b43d966" containerName="placement-log" containerID="cri-o://be87e0c26b43f57fe8bd6e999c716fddff3bcf7e4eec9b29a39b86ceafba0594" gracePeriod=30 Feb 02 13:24:45 crc kubenswrapper[4721]: I0202 13:24:45.296306 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-75b75c495b-kpsxz" podUID="873ec78b-5777-4560-a744-c4789b43d966" containerName="placement-api" containerID="cri-o://9340d886174492f6ae0a15f3f3e7b662045af6f27123113fd5b1ff49e71bab73" gracePeriod=30 Feb 02 13:24:45 crc kubenswrapper[4721]: I0202 13:24:45.430479 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7d63f1df-bbdc-42ee-a234-2d691a3ce7ba","Type":"ContainerStarted","Data":"b970b10b4d8b2db37f91db7ce227dfec2e7533ccd3d9f4a2de977aba52503140"} Feb 02 13:24:45 crc kubenswrapper[4721]: I0202 13:24:45.430514 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7d63f1df-bbdc-42ee-a234-2d691a3ce7ba","Type":"ContainerStarted","Data":"49d0dd9de3db0f2855ebbfa61b9e326e50f23cf02562ca89635931fe6f270729"} Feb 02 13:24:46 crc kubenswrapper[4721]: I0202 13:24:46.451976 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7d63f1df-bbdc-42ee-a234-2d691a3ce7ba","Type":"ContainerStarted","Data":"c72bbd2a66e905e47f9cac569f2d5aea00e3740eddfea99d3ab627e9a5a914cf"} Feb 02 13:24:46 crc kubenswrapper[4721]: I0202 13:24:46.460033 4721 generic.go:334] "Generic (PLEG): container finished" podID="873ec78b-5777-4560-a744-c4789b43d966" containerID="be87e0c26b43f57fe8bd6e999c716fddff3bcf7e4eec9b29a39b86ceafba0594" exitCode=143 Feb 02 13:24:46 crc kubenswrapper[4721]: I0202 13:24:46.460093 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-75b75c495b-kpsxz" event={"ID":"873ec78b-5777-4560-a744-c4789b43d966","Type":"ContainerDied","Data":"be87e0c26b43f57fe8bd6e999c716fddff3bcf7e4eec9b29a39b86ceafba0594"} Feb 02 13:24:46 crc kubenswrapper[4721]: I0202 13:24:46.473790 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.473766332 podStartE2EDuration="3.473766332s" podCreationTimestamp="2026-02-02 13:24:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:24:46.469920828 +0000 UTC m=+1426.772435217" watchObservedRunningTime="2026-02-02 13:24:46.473766332 +0000 UTC m=+1426.776280721" Feb 02 13:24:46 crc kubenswrapper[4721]: I0202 13:24:46.663376 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b2blk" Feb 02 13:24:46 crc kubenswrapper[4721]: I0202 13:24:46.733688 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b2blk" Feb 02 13:24:46 crc kubenswrapper[4721]: I0202 13:24:46.901898 4721 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b2blk"] Feb 02 13:24:47 crc kubenswrapper[4721]: I0202 13:24:47.831698 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-9b87bd57c-2glsn"] Feb 02 13:24:47 crc kubenswrapper[4721]: I0202 13:24:47.833935 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-9b87bd57c-2glsn" Feb 02 13:24:47 crc kubenswrapper[4721]: I0202 13:24:47.841572 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 02 13:24:47 crc kubenswrapper[4721]: I0202 13:24:47.841846 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 02 13:24:47 crc kubenswrapper[4721]: I0202 13:24:47.842136 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 02 13:24:47 crc kubenswrapper[4721]: I0202 13:24:47.844149 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-9b87bd57c-2glsn"] Feb 02 13:24:47 crc kubenswrapper[4721]: I0202 13:24:47.904214 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c04183e6-a1f0-4d8c-aa00-8dd660336a3b-run-httpd\") pod \"swift-proxy-9b87bd57c-2glsn\" (UID: \"c04183e6-a1f0-4d8c-aa00-8dd660336a3b\") " pod="openstack/swift-proxy-9b87bd57c-2glsn" Feb 02 13:24:47 crc kubenswrapper[4721]: I0202 13:24:47.904262 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c04183e6-a1f0-4d8c-aa00-8dd660336a3b-log-httpd\") pod \"swift-proxy-9b87bd57c-2glsn\" (UID: \"c04183e6-a1f0-4d8c-aa00-8dd660336a3b\") " pod="openstack/swift-proxy-9b87bd57c-2glsn" Feb 02 13:24:47 crc kubenswrapper[4721]: I0202 13:24:47.904335 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04183e6-a1f0-4d8c-aa00-8dd660336a3b-combined-ca-bundle\") pod \"swift-proxy-9b87bd57c-2glsn\" (UID: \"c04183e6-a1f0-4d8c-aa00-8dd660336a3b\") " pod="openstack/swift-proxy-9b87bd57c-2glsn" Feb 02 13:24:47 crc kubenswrapper[4721]: I0202 13:24:47.904412 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c04183e6-a1f0-4d8c-aa00-8dd660336a3b-public-tls-certs\") pod \"swift-proxy-9b87bd57c-2glsn\" (UID: \"c04183e6-a1f0-4d8c-aa00-8dd660336a3b\") " pod="openstack/swift-proxy-9b87bd57c-2glsn" Feb 02 13:24:47 crc kubenswrapper[4721]: I0202 13:24:47.904460 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04183e6-a1f0-4d8c-aa00-8dd660336a3b-config-data\") pod \"swift-proxy-9b87bd57c-2glsn\" (UID: \"c04183e6-a1f0-4d8c-aa00-8dd660336a3b\") " pod="openstack/swift-proxy-9b87bd57c-2glsn" Feb 02 13:24:47 crc kubenswrapper[4721]: I0202 13:24:47.904478 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c04183e6-a1f0-4d8c-aa00-8dd660336a3b-etc-swift\") pod \"swift-proxy-9b87bd57c-2glsn\" (UID: \"c04183e6-a1f0-4d8c-aa00-8dd660336a3b\") " pod="openstack/swift-proxy-9b87bd57c-2glsn" Feb 02 13:24:47 crc kubenswrapper[4721]: I0202 
13:24:47.904534 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c04183e6-a1f0-4d8c-aa00-8dd660336a3b-internal-tls-certs\") pod \"swift-proxy-9b87bd57c-2glsn\" (UID: \"c04183e6-a1f0-4d8c-aa00-8dd660336a3b\") " pod="openstack/swift-proxy-9b87bd57c-2glsn" Feb 02 13:24:47 crc kubenswrapper[4721]: I0202 13:24:47.904595 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t252c\" (UniqueName: \"kubernetes.io/projected/c04183e6-a1f0-4d8c-aa00-8dd660336a3b-kube-api-access-t252c\") pod \"swift-proxy-9b87bd57c-2glsn\" (UID: \"c04183e6-a1f0-4d8c-aa00-8dd660336a3b\") " pod="openstack/swift-proxy-9b87bd57c-2glsn" Feb 02 13:24:48 crc kubenswrapper[4721]: I0202 13:24:48.007105 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04183e6-a1f0-4d8c-aa00-8dd660336a3b-combined-ca-bundle\") pod \"swift-proxy-9b87bd57c-2glsn\" (UID: \"c04183e6-a1f0-4d8c-aa00-8dd660336a3b\") " pod="openstack/swift-proxy-9b87bd57c-2glsn" Feb 02 13:24:48 crc kubenswrapper[4721]: I0202 13:24:48.007253 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c04183e6-a1f0-4d8c-aa00-8dd660336a3b-public-tls-certs\") pod \"swift-proxy-9b87bd57c-2glsn\" (UID: \"c04183e6-a1f0-4d8c-aa00-8dd660336a3b\") " pod="openstack/swift-proxy-9b87bd57c-2glsn" Feb 02 13:24:48 crc kubenswrapper[4721]: I0202 13:24:48.008210 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04183e6-a1f0-4d8c-aa00-8dd660336a3b-config-data\") pod \"swift-proxy-9b87bd57c-2glsn\" (UID: \"c04183e6-a1f0-4d8c-aa00-8dd660336a3b\") " pod="openstack/swift-proxy-9b87bd57c-2glsn" Feb 02 13:24:48 crc kubenswrapper[4721]: I0202 13:24:48.008243 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c04183e6-a1f0-4d8c-aa00-8dd660336a3b-etc-swift\") pod \"swift-proxy-9b87bd57c-2glsn\" (UID: \"c04183e6-a1f0-4d8c-aa00-8dd660336a3b\") " pod="openstack/swift-proxy-9b87bd57c-2glsn" Feb 02 13:24:48 crc kubenswrapper[4721]: I0202 13:24:48.008331 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c04183e6-a1f0-4d8c-aa00-8dd660336a3b-internal-tls-certs\") pod \"swift-proxy-9b87bd57c-2glsn\" (UID: \"c04183e6-a1f0-4d8c-aa00-8dd660336a3b\") " pod="openstack/swift-proxy-9b87bd57c-2glsn" Feb 02 13:24:48 crc kubenswrapper[4721]: I0202 13:24:48.008424 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t252c\" (UniqueName: \"kubernetes.io/projected/c04183e6-a1f0-4d8c-aa00-8dd660336a3b-kube-api-access-t252c\") pod \"swift-proxy-9b87bd57c-2glsn\" (UID: \"c04183e6-a1f0-4d8c-aa00-8dd660336a3b\") " pod="openstack/swift-proxy-9b87bd57c-2glsn" Feb 02 13:24:48 crc kubenswrapper[4721]: I0202 13:24:48.008511 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c04183e6-a1f0-4d8c-aa00-8dd660336a3b-run-httpd\") pod \"swift-proxy-9b87bd57c-2glsn\" (UID: \"c04183e6-a1f0-4d8c-aa00-8dd660336a3b\") " pod="openstack/swift-proxy-9b87bd57c-2glsn" Feb 02 13:24:48 crc kubenswrapper[4721]: I0202 13:24:48.008548 
4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c04183e6-a1f0-4d8c-aa00-8dd660336a3b-log-httpd\") pod \"swift-proxy-9b87bd57c-2glsn\" (UID: \"c04183e6-a1f0-4d8c-aa00-8dd660336a3b\") " pod="openstack/swift-proxy-9b87bd57c-2glsn" Feb 02 13:24:48 crc kubenswrapper[4721]: I0202 13:24:48.009017 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c04183e6-a1f0-4d8c-aa00-8dd660336a3b-log-httpd\") pod \"swift-proxy-9b87bd57c-2glsn\" (UID: \"c04183e6-a1f0-4d8c-aa00-8dd660336a3b\") " pod="openstack/swift-proxy-9b87bd57c-2glsn" Feb 02 13:24:48 crc kubenswrapper[4721]: I0202 13:24:48.010177 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c04183e6-a1f0-4d8c-aa00-8dd660336a3b-run-httpd\") pod \"swift-proxy-9b87bd57c-2glsn\" (UID: \"c04183e6-a1f0-4d8c-aa00-8dd660336a3b\") " pod="openstack/swift-proxy-9b87bd57c-2glsn" Feb 02 13:24:48 crc kubenswrapper[4721]: I0202 13:24:48.013695 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c04183e6-a1f0-4d8c-aa00-8dd660336a3b-public-tls-certs\") pod \"swift-proxy-9b87bd57c-2glsn\" (UID: \"c04183e6-a1f0-4d8c-aa00-8dd660336a3b\") " pod="openstack/swift-proxy-9b87bd57c-2glsn" Feb 02 13:24:48 crc kubenswrapper[4721]: I0202 13:24:48.014402 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04183e6-a1f0-4d8c-aa00-8dd660336a3b-config-data\") pod \"swift-proxy-9b87bd57c-2glsn\" (UID: \"c04183e6-a1f0-4d8c-aa00-8dd660336a3b\") " pod="openstack/swift-proxy-9b87bd57c-2glsn" Feb 02 13:24:48 crc kubenswrapper[4721]: I0202 13:24:48.015225 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c04183e6-a1f0-4d8c-aa00-8dd660336a3b-etc-swift\") pod \"swift-proxy-9b87bd57c-2glsn\" (UID: \"c04183e6-a1f0-4d8c-aa00-8dd660336a3b\") " pod="openstack/swift-proxy-9b87bd57c-2glsn" Feb 02 13:24:48 crc kubenswrapper[4721]: I0202 13:24:48.016591 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04183e6-a1f0-4d8c-aa00-8dd660336a3b-combined-ca-bundle\") pod \"swift-proxy-9b87bd57c-2glsn\" (UID: \"c04183e6-a1f0-4d8c-aa00-8dd660336a3b\") " pod="openstack/swift-proxy-9b87bd57c-2glsn" Feb 02 13:24:48 crc kubenswrapper[4721]: I0202 13:24:48.016739 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c04183e6-a1f0-4d8c-aa00-8dd660336a3b-internal-tls-certs\") pod \"swift-proxy-9b87bd57c-2glsn\" (UID: \"c04183e6-a1f0-4d8c-aa00-8dd660336a3b\") " pod="openstack/swift-proxy-9b87bd57c-2glsn" Feb 02 13:24:48 crc kubenswrapper[4721]: I0202 13:24:48.031596 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t252c\" (UniqueName: \"kubernetes.io/projected/c04183e6-a1f0-4d8c-aa00-8dd660336a3b-kube-api-access-t252c\") pod \"swift-proxy-9b87bd57c-2glsn\" (UID: \"c04183e6-a1f0-4d8c-aa00-8dd660336a3b\") " pod="openstack/swift-proxy-9b87bd57c-2glsn" Feb 02 13:24:48 crc kubenswrapper[4721]: I0202 13:24:48.182397 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-9b87bd57c-2glsn" Feb 02 13:24:48 crc kubenswrapper[4721]: I0202 13:24:48.496913 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b2blk" podUID="cb085bc7-03fe-45d5-8293-754aa8a47e79" containerName="registry-server" containerID="cri-o://fe87ede3ed73966fcb2f3ff06b67149c74278e5cdd6e6a2388c62a80901815de" gracePeriod=2 Feb 02 13:24:48 crc kubenswrapper[4721]: I0202 13:24:48.830263 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.421977 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-9b87bd57c-2glsn"] Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.446690 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b2blk" Feb 02 13:24:49 crc kubenswrapper[4721]: W0202 13:24:49.453151 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc04183e6_a1f0_4d8c_aa00_8dd660336a3b.slice/crio-5dc8ef2af1518df2d6f4893a51d2dd25291bfd1ab7d8a637f7ca69a704b0a1ea WatchSource:0}: Error finding container 5dc8ef2af1518df2d6f4893a51d2dd25291bfd1ab7d8a637f7ca69a704b0a1ea: Status 404 returned error can't find the container with id 5dc8ef2af1518df2d6f4893a51d2dd25291bfd1ab7d8a637f7ca69a704b0a1ea Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.554339 4721 generic.go:334] "Generic (PLEG): container finished" podID="873ec78b-5777-4560-a744-c4789b43d966" containerID="9340d886174492f6ae0a15f3f3e7b662045af6f27123113fd5b1ff49e71bab73" exitCode=0 Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.554493 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-75b75c495b-kpsxz" event={"ID":"873ec78b-5777-4560-a744-c4789b43d966","Type":"ContainerDied","Data":"9340d886174492f6ae0a15f3f3e7b662045af6f27123113fd5b1ff49e71bab73"} Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.570943 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb085bc7-03fe-45d5-8293-754aa8a47e79-catalog-content\") pod \"cb085bc7-03fe-45d5-8293-754aa8a47e79\" (UID: \"cb085bc7-03fe-45d5-8293-754aa8a47e79\") " Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.571138 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqx6q\" (UniqueName: \"kubernetes.io/projected/cb085bc7-03fe-45d5-8293-754aa8a47e79-kube-api-access-dqx6q\") pod \"cb085bc7-03fe-45d5-8293-754aa8a47e79\" (UID: \"cb085bc7-03fe-45d5-8293-754aa8a47e79\") " Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.571218 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb085bc7-03fe-45d5-8293-754aa8a47e79-utilities\") pod \"cb085bc7-03fe-45d5-8293-754aa8a47e79\" (UID: \"cb085bc7-03fe-45d5-8293-754aa8a47e79\") " Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.577789 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb085bc7-03fe-45d5-8293-754aa8a47e79-utilities" (OuterVolumeSpecName: "utilities") pod "cb085bc7-03fe-45d5-8293-754aa8a47e79" (UID: "cb085bc7-03fe-45d5-8293-754aa8a47e79"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.588754 4721 generic.go:334] "Generic (PLEG): container finished" podID="cb085bc7-03fe-45d5-8293-754aa8a47e79" containerID="fe87ede3ed73966fcb2f3ff06b67149c74278e5cdd6e6a2388c62a80901815de" exitCode=0 Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.588821 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2blk" event={"ID":"cb085bc7-03fe-45d5-8293-754aa8a47e79","Type":"ContainerDied","Data":"fe87ede3ed73966fcb2f3ff06b67149c74278e5cdd6e6a2388c62a80901815de"} Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.588850 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2blk" event={"ID":"cb085bc7-03fe-45d5-8293-754aa8a47e79","Type":"ContainerDied","Data":"ca306f9ea887ebe52384a1958a49ecea30993901251ae5049666a5cc6680fe83"} Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.588866 4721 scope.go:117] "RemoveContainer" containerID="fe87ede3ed73966fcb2f3ff06b67149c74278e5cdd6e6a2388c62a80901815de" Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.589039 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b2blk" Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.592142 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb085bc7-03fe-45d5-8293-754aa8a47e79-kube-api-access-dqx6q" (OuterVolumeSpecName: "kube-api-access-dqx6q") pod "cb085bc7-03fe-45d5-8293-754aa8a47e79" (UID: "cb085bc7-03fe-45d5-8293-754aa8a47e79"). InnerVolumeSpecName "kube-api-access-dqx6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.598794 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-9b87bd57c-2glsn" event={"ID":"c04183e6-a1f0-4d8c-aa00-8dd660336a3b","Type":"ContainerStarted","Data":"5dc8ef2af1518df2d6f4893a51d2dd25291bfd1ab7d8a637f7ca69a704b0a1ea"} Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.623997 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-75b75c495b-kpsxz" Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.637982 4721 scope.go:117] "RemoveContainer" containerID="1657f0f86f0baa3fd5b4910f2d2a3f7f01376b4573cd7a450f9ac3d3c0377f0f" Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.669608 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb085bc7-03fe-45d5-8293-754aa8a47e79-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb085bc7-03fe-45d5-8293-754aa8a47e79" (UID: "cb085bc7-03fe-45d5-8293-754aa8a47e79"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.679541 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-scripts\") pod \"873ec78b-5777-4560-a744-c4789b43d966\" (UID: \"873ec78b-5777-4560-a744-c4789b43d966\") " Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.681951 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-config-data\") pod \"873ec78b-5777-4560-a744-c4789b43d966\" (UID: \"873ec78b-5777-4560-a744-c4789b43d966\") " Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.682031 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-internal-tls-certs\") pod \"873ec78b-5777-4560-a744-c4789b43d966\" (UID: \"873ec78b-5777-4560-a744-c4789b43d966\") " Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.682207 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-public-tls-certs\") pod \"873ec78b-5777-4560-a744-c4789b43d966\" (UID: \"873ec78b-5777-4560-a744-c4789b43d966\") " Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.683584 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-combined-ca-bundle\") pod \"873ec78b-5777-4560-a744-c4789b43d966\" (UID: \"873ec78b-5777-4560-a744-c4789b43d966\") " Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.683645 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngbhh\" (UniqueName: \"kubernetes.io/projected/873ec78b-5777-4560-a744-c4789b43d966-kube-api-access-ngbhh\") pod \"873ec78b-5777-4560-a744-c4789b43d966\" (UID: \"873ec78b-5777-4560-a744-c4789b43d966\") " Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.683742 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/873ec78b-5777-4560-a744-c4789b43d966-logs\") pod \"873ec78b-5777-4560-a744-c4789b43d966\" (UID: \"873ec78b-5777-4560-a744-c4789b43d966\") " Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.705833 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqx6q\" (UniqueName: \"kubernetes.io/projected/cb085bc7-03fe-45d5-8293-754aa8a47e79-kube-api-access-dqx6q\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.705871 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb085bc7-03fe-45d5-8293-754aa8a47e79-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.705886 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb085bc7-03fe-45d5-8293-754aa8a47e79-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.705943 4721 scope.go:117] "RemoveContainer" containerID="5e7d094ebf15b03eae8cab7968f99b33fb74f6a236c1143143fd5cef22da4396" Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 
13:24:49.706201 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/873ec78b-5777-4560-a744-c4789b43d966-logs" (OuterVolumeSpecName: "logs") pod "873ec78b-5777-4560-a744-c4789b43d966" (UID: "873ec78b-5777-4560-a744-c4789b43d966"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.717267 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-scripts" (OuterVolumeSpecName: "scripts") pod "873ec78b-5777-4560-a744-c4789b43d966" (UID: "873ec78b-5777-4560-a744-c4789b43d966"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.717617 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/873ec78b-5777-4560-a744-c4789b43d966-kube-api-access-ngbhh" (OuterVolumeSpecName: "kube-api-access-ngbhh") pod "873ec78b-5777-4560-a744-c4789b43d966" (UID: "873ec78b-5777-4560-a744-c4789b43d966"). InnerVolumeSpecName "kube-api-access-ngbhh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.786407 4721 scope.go:117] "RemoveContainer" containerID="fe87ede3ed73966fcb2f3ff06b67149c74278e5cdd6e6a2388c62a80901815de"
Feb 02 13:24:49 crc kubenswrapper[4721]: E0202 13:24:49.787279 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe87ede3ed73966fcb2f3ff06b67149c74278e5cdd6e6a2388c62a80901815de\": container with ID starting with fe87ede3ed73966fcb2f3ff06b67149c74278e5cdd6e6a2388c62a80901815de not found: ID does not exist" containerID="fe87ede3ed73966fcb2f3ff06b67149c74278e5cdd6e6a2388c62a80901815de"
Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.787315 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe87ede3ed73966fcb2f3ff06b67149c74278e5cdd6e6a2388c62a80901815de"} err="failed to get container status \"fe87ede3ed73966fcb2f3ff06b67149c74278e5cdd6e6a2388c62a80901815de\": rpc error: code = NotFound desc = could not find container \"fe87ede3ed73966fcb2f3ff06b67149c74278e5cdd6e6a2388c62a80901815de\": container with ID starting with fe87ede3ed73966fcb2f3ff06b67149c74278e5cdd6e6a2388c62a80901815de not found: ID does not exist"
Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.787338 4721 scope.go:117] "RemoveContainer" containerID="1657f0f86f0baa3fd5b4910f2d2a3f7f01376b4573cd7a450f9ac3d3c0377f0f"
Feb 02 13:24:49 crc kubenswrapper[4721]: E0202 13:24:49.787762 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1657f0f86f0baa3fd5b4910f2d2a3f7f01376b4573cd7a450f9ac3d3c0377f0f\": container with ID starting with 1657f0f86f0baa3fd5b4910f2d2a3f7f01376b4573cd7a450f9ac3d3c0377f0f not found: ID does not exist" containerID="1657f0f86f0baa3fd5b4910f2d2a3f7f01376b4573cd7a450f9ac3d3c0377f0f"
Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.787780 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1657f0f86f0baa3fd5b4910f2d2a3f7f01376b4573cd7a450f9ac3d3c0377f0f"} err="failed to get container status \"1657f0f86f0baa3fd5b4910f2d2a3f7f01376b4573cd7a450f9ac3d3c0377f0f\": rpc error: code = NotFound desc = could not find container \"1657f0f86f0baa3fd5b4910f2d2a3f7f01376b4573cd7a450f9ac3d3c0377f0f\": container with ID starting with 1657f0f86f0baa3fd5b4910f2d2a3f7f01376b4573cd7a450f9ac3d3c0377f0f not found: ID does not exist"
Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.787794 4721 scope.go:117] "RemoveContainer" containerID="5e7d094ebf15b03eae8cab7968f99b33fb74f6a236c1143143fd5cef22da4396"
Feb 02 13:24:49 crc kubenswrapper[4721]: E0202 13:24:49.788021 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e7d094ebf15b03eae8cab7968f99b33fb74f6a236c1143143fd5cef22da4396\": container with ID starting with 5e7d094ebf15b03eae8cab7968f99b33fb74f6a236c1143143fd5cef22da4396 not found: ID does not exist" containerID="5e7d094ebf15b03eae8cab7968f99b33fb74f6a236c1143143fd5cef22da4396"
Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.788039 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e7d094ebf15b03eae8cab7968f99b33fb74f6a236c1143143fd5cef22da4396"} err="failed to get container status \"5e7d094ebf15b03eae8cab7968f99b33fb74f6a236c1143143fd5cef22da4396\": rpc error: code = NotFound desc = could not find container \"5e7d094ebf15b03eae8cab7968f99b33fb74f6a236c1143143fd5cef22da4396\": container with ID starting with 5e7d094ebf15b03eae8cab7968f99b33fb74f6a236c1143143fd5cef22da4396 not found: ID does not exist"
Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.818971 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngbhh\" (UniqueName: \"kubernetes.io/projected/873ec78b-5777-4560-a744-c4789b43d966-kube-api-access-ngbhh\") on node \"crc\" DevicePath \"\""
Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.819016 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/873ec78b-5777-4560-a744-c4789b43d966-logs\") on node \"crc\" DevicePath \"\""
Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.819034 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.827840 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-config-data" (OuterVolumeSpecName: "config-data") pod "873ec78b-5777-4560-a744-c4789b43d966" (UID: "873ec78b-5777-4560-a744-c4789b43d966"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.852982 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "873ec78b-5777-4560-a744-c4789b43d966" (UID: "873ec78b-5777-4560-a744-c4789b43d966"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.917610 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "873ec78b-5777-4560-a744-c4789b43d966" (UID: "873ec78b-5777-4560-a744-c4789b43d966"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.928336 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.932713 4721 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.932748 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.969545 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "873ec78b-5777-4560-a744-c4789b43d966" (UID: "873ec78b-5777-4560-a744-c4789b43d966"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.977351 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b2blk"]
Feb 02 13:24:49 crc kubenswrapper[4721]: I0202 13:24:49.992528 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b2blk"]
Feb 02 13:24:50 crc kubenswrapper[4721]: I0202 13:24:50.035058 4721 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/873ec78b-5777-4560-a744-c4789b43d966-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 13:24:50 crc kubenswrapper[4721]: I0202 13:24:50.432972 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb085bc7-03fe-45d5-8293-754aa8a47e79" path="/var/lib/kubelet/pods/cb085bc7-03fe-45d5-8293-754aa8a47e79/volumes"
Feb 02 13:24:50 crc kubenswrapper[4721]: I0202 13:24:50.631306 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-75b75c495b-kpsxz" event={"ID":"873ec78b-5777-4560-a744-c4789b43d966","Type":"ContainerDied","Data":"bef5ef7eca11516e2b9cce9579ac419ba493f62c89b699687290738232336cce"}
Feb 02 13:24:50 crc kubenswrapper[4721]: I0202 13:24:50.631363 4721 scope.go:117] "RemoveContainer" containerID="9340d886174492f6ae0a15f3f3e7b662045af6f27123113fd5b1ff49e71bab73"
Feb 02 13:24:50 crc kubenswrapper[4721]: I0202 13:24:50.631481 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-75b75c495b-kpsxz"
Feb 02 13:24:50 crc kubenswrapper[4721]: I0202 13:24:50.653179 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-9b87bd57c-2glsn" event={"ID":"c04183e6-a1f0-4d8c-aa00-8dd660336a3b","Type":"ContainerStarted","Data":"e5cda999484cdb96ee93ed2a22b964372c74581db1efaf3b1e3eb994a5a492ce"}
Feb 02 13:24:50 crc kubenswrapper[4721]: I0202 13:24:50.653229 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-9b87bd57c-2glsn" event={"ID":"c04183e6-a1f0-4d8c-aa00-8dd660336a3b","Type":"ContainerStarted","Data":"860409a2d3a132da8f27bfee83b6ab2893aea6a337436216e9ca0ca552f22f90"}
Feb 02 13:24:50 crc kubenswrapper[4721]: I0202 13:24:50.653680 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-9b87bd57c-2glsn"
Feb 02 13:24:50 crc kubenswrapper[4721]: I0202 13:24:50.653726 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-9b87bd57c-2glsn"
Feb 02 13:24:50 crc kubenswrapper[4721]: I0202 13:24:50.662284 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-75b75c495b-kpsxz"]
Feb 02 13:24:50 crc kubenswrapper[4721]: I0202 13:24:50.696518 4721 scope.go:117] "RemoveContainer" containerID="be87e0c26b43f57fe8bd6e999c716fddff3bcf7e4eec9b29a39b86ceafba0594"
Feb 02 13:24:50 crc kubenswrapper[4721]: I0202 13:24:50.702361 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-75b75c495b-kpsxz"]
Feb 02 13:24:50 crc kubenswrapper[4721]: I0202 13:24:50.707273 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-9b87bd57c-2glsn" podStartSLOduration=3.707255178 podStartE2EDuration="3.707255178s" podCreationTimestamp="2026-02-02 13:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:24:50.683570466 +0000 UTC m=+1430.986084865" watchObservedRunningTime="2026-02-02 13:24:50.707255178 +0000 UTC m=+1431.009769567"
Feb 02 13:24:51 crc kubenswrapper[4721]: I0202 13:24:51.331967 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 13:24:51 crc kubenswrapper[4721]: I0202 13:24:51.332625 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a1482d2e-b885-44bd-b679-109f0b9698ea" containerName="ceilometer-central-agent" containerID="cri-o://e7a1435ecc8600326126b2f72cf42852a2ff26324fb369f144ca54a8e7a02fe8" gracePeriod=30
Feb 02 13:24:51 crc kubenswrapper[4721]: I0202 13:24:51.333199 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a1482d2e-b885-44bd-b679-109f0b9698ea" containerName="proxy-httpd" containerID="cri-o://f6ced5c029fbb0ced62ee14010b08872173b1b7d00f61d4cb14f85035de775b1" gracePeriod=30
Feb 02 13:24:51 crc kubenswrapper[4721]: I0202 13:24:51.333347 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a1482d2e-b885-44bd-b679-109f0b9698ea" containerName="sg-core" containerID="cri-o://4d53de923a6215191fa0ab435e823d19598bea6ec82ecee59ac252608725c58c" gracePeriod=30
Feb 02 13:24:51 crc kubenswrapper[4721]: I0202 13:24:51.333420 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a1482d2e-b885-44bd-b679-109f0b9698ea" containerName="ceilometer-notification-agent" containerID="cri-o://0b3cb07d111bcefe4cce6e8cdec0586b864539201a7ae3eef461e7791d1c123a" gracePeriod=30
Feb 02 13:24:51 crc kubenswrapper[4721]: I0202 13:24:51.348601 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="a1482d2e-b885-44bd-b679-109f0b9698ea" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.209:3000/\": EOF"
Feb 02 13:24:51 crc kubenswrapper[4721]: I0202 13:24:51.510869 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-74cc678f5-fkzpw"
Feb 02 13:24:51 crc kubenswrapper[4721]: I0202 13:24:51.609998 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7556fd87fb-z78lc"]
Feb 02 13:24:51 crc kubenswrapper[4721]: I0202 13:24:51.610277 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7556fd87fb-z78lc" podUID="6f746721-5da3-4418-8ef6-d0b88f2121bc" containerName="neutron-api" containerID="cri-o://4badbbeb19bd1ce4dabd3dbcb1949a66c97f449e7bcb0e940791d3782f7337b1" gracePeriod=30
Feb 02 13:24:51 crc kubenswrapper[4721]: I0202 13:24:51.610674 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7556fd87fb-z78lc" podUID="6f746721-5da3-4418-8ef6-d0b88f2121bc" containerName="neutron-httpd" containerID="cri-o://8f88f40e370779b33eeb9ef5e263779063066216a67b07cb99c3e5b293638a3d" gracePeriod=30
Feb 02 13:24:51 crc kubenswrapper[4721]: I0202 13:24:51.746283 4721 generic.go:334] "Generic (PLEG): container finished" podID="a1482d2e-b885-44bd-b679-109f0b9698ea" containerID="f6ced5c029fbb0ced62ee14010b08872173b1b7d00f61d4cb14f85035de775b1" exitCode=0
Feb 02 13:24:51 crc kubenswrapper[4721]: I0202 13:24:51.746318 4721 generic.go:334] "Generic (PLEG): container finished" podID="a1482d2e-b885-44bd-b679-109f0b9698ea" containerID="4d53de923a6215191fa0ab435e823d19598bea6ec82ecee59ac252608725c58c" exitCode=2
Feb 02 13:24:51 crc kubenswrapper[4721]: I0202 13:24:51.746384 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1482d2e-b885-44bd-b679-109f0b9698ea","Type":"ContainerDied","Data":"f6ced5c029fbb0ced62ee14010b08872173b1b7d00f61d4cb14f85035de775b1"}
Feb 02 13:24:51 crc kubenswrapper[4721]: I0202 13:24:51.746410 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1482d2e-b885-44bd-b679-109f0b9698ea","Type":"ContainerDied","Data":"4d53de923a6215191fa0ab435e823d19598bea6ec82ecee59ac252608725c58c"}
Feb 02 13:24:52 crc kubenswrapper[4721]: I0202 13:24:52.425934 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="873ec78b-5777-4560-a744-c4789b43d966" path="/var/lib/kubelet/pods/873ec78b-5777-4560-a744-c4789b43d966/volumes"
Feb 02 13:24:52 crc kubenswrapper[4721]: I0202 13:24:52.684483 4721 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod1dc1a7fa-c727-45f1-a53a-e9a5bc059fa5] : Timed out while waiting for systemd to remove kubepods-besteffort-pod1dc1a7fa_c727_45f1_a53a_e9a5bc059fa5.slice"
Feb 02 13:24:52 crc kubenswrapper[4721]: I0202 13:24:52.790956 4721 generic.go:334] "Generic (PLEG): container finished" podID="6f746721-5da3-4418-8ef6-d0b88f2121bc" containerID="8f88f40e370779b33eeb9ef5e263779063066216a67b07cb99c3e5b293638a3d" exitCode=0
Feb 02 13:24:52 crc kubenswrapper[4721]: I0202 13:24:52.791020 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7556fd87fb-z78lc" event={"ID":"6f746721-5da3-4418-8ef6-d0b88f2121bc","Type":"ContainerDied","Data":"8f88f40e370779b33eeb9ef5e263779063066216a67b07cb99c3e5b293638a3d"}
Feb 02 13:24:52 crc kubenswrapper[4721]: I0202 13:24:52.799331 4721 generic.go:334] "Generic (PLEG): container finished" podID="a1482d2e-b885-44bd-b679-109f0b9698ea" containerID="e7a1435ecc8600326126b2f72cf42852a2ff26324fb369f144ca54a8e7a02fe8" exitCode=0
Feb 02 13:24:52 crc kubenswrapper[4721]: I0202 13:24:52.799372 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1482d2e-b885-44bd-b679-109f0b9698ea","Type":"ContainerDied","Data":"e7a1435ecc8600326126b2f72cf42852a2ff26324fb369f144ca54a8e7a02fe8"}
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.165978 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="a1482d2e-b885-44bd-b679-109f0b9698ea" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.209:3000/\": dial tcp 10.217.0.209:3000: connect: connection refused"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.665203 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-69599c8b5f-rjs76"]
Feb 02 13:24:53 crc kubenswrapper[4721]: E0202 13:24:53.666020 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="873ec78b-5777-4560-a744-c4789b43d966" containerName="placement-api"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.666044 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="873ec78b-5777-4560-a744-c4789b43d966" containerName="placement-api"
Feb 02 13:24:53 crc kubenswrapper[4721]: E0202 13:24:53.666080 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb085bc7-03fe-45d5-8293-754aa8a47e79" containerName="registry-server"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.666089 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb085bc7-03fe-45d5-8293-754aa8a47e79" containerName="registry-server"
Feb 02 13:24:53 crc kubenswrapper[4721]: E0202 13:24:53.666113 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="873ec78b-5777-4560-a744-c4789b43d966" containerName="placement-log"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.666121 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="873ec78b-5777-4560-a744-c4789b43d966" containerName="placement-log"
Feb 02 13:24:53 crc kubenswrapper[4721]: E0202 13:24:53.666156 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb085bc7-03fe-45d5-8293-754aa8a47e79" containerName="extract-utilities"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.666164 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb085bc7-03fe-45d5-8293-754aa8a47e79" containerName="extract-utilities"
Feb 02 13:24:53 crc kubenswrapper[4721]: E0202 13:24:53.666194 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb085bc7-03fe-45d5-8293-754aa8a47e79" containerName="extract-content"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.666202 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb085bc7-03fe-45d5-8293-754aa8a47e79" containerName="extract-content"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.666453 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb085bc7-03fe-45d5-8293-754aa8a47e79" containerName="registry-server"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.666516 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="873ec78b-5777-4560-a744-c4789b43d966" containerName="placement-log"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.666551 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="873ec78b-5777-4560-a744-c4789b43d966" containerName="placement-api"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.667575 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-69599c8b5f-rjs76"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.674906 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.675205 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-ldgvp"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.675363 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.687104 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-69599c8b5f-rjs76"]
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.772435 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce072a84-75da-4060-9c4a-d029b3a14947-config-data-custom\") pod \"heat-engine-69599c8b5f-rjs76\" (UID: \"ce072a84-75da-4060-9c4a-d029b3a14947\") " pod="openstack/heat-engine-69599c8b5f-rjs76"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.772634 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce072a84-75da-4060-9c4a-d029b3a14947-combined-ca-bundle\") pod \"heat-engine-69599c8b5f-rjs76\" (UID: \"ce072a84-75da-4060-9c4a-d029b3a14947\") " pod="openstack/heat-engine-69599c8b5f-rjs76"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.772701 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxc9d\" (UniqueName: \"kubernetes.io/projected/ce072a84-75da-4060-9c4a-d029b3a14947-kube-api-access-wxc9d\") pod \"heat-engine-69599c8b5f-rjs76\" (UID: \"ce072a84-75da-4060-9c4a-d029b3a14947\") " pod="openstack/heat-engine-69599c8b5f-rjs76"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.772721 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce072a84-75da-4060-9c4a-d029b3a14947-config-data\") pod \"heat-engine-69599c8b5f-rjs76\" (UID: \"ce072a84-75da-4060-9c4a-d029b3a14947\") " pod="openstack/heat-engine-69599c8b5f-rjs76"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.792715 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-m87tn"]
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.794624 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-m87tn"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.814833 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-m87tn"]
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.868085 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-59bff7fb85-wq6q5"]
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.870042 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-59bff7fb85-wq6q5"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.874451 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxc9d\" (UniqueName: \"kubernetes.io/projected/ce072a84-75da-4060-9c4a-d029b3a14947-kube-api-access-wxc9d\") pod \"heat-engine-69599c8b5f-rjs76\" (UID: \"ce072a84-75da-4060-9c4a-d029b3a14947\") " pod="openstack/heat-engine-69599c8b5f-rjs76"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.874492 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce072a84-75da-4060-9c4a-d029b3a14947-config-data\") pod \"heat-engine-69599c8b5f-rjs76\" (UID: \"ce072a84-75da-4060-9c4a-d029b3a14947\") " pod="openstack/heat-engine-69599c8b5f-rjs76"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.874542 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce072a84-75da-4060-9c4a-d029b3a14947-config-data-custom\") pod \"heat-engine-69599c8b5f-rjs76\" (UID: \"ce072a84-75da-4060-9c4a-d029b3a14947\") " pod="openstack/heat-engine-69599c8b5f-rjs76"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.874685 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce072a84-75da-4060-9c4a-d029b3a14947-combined-ca-bundle\") pod \"heat-engine-69599c8b5f-rjs76\" (UID: \"ce072a84-75da-4060-9c4a-d029b3a14947\") " pod="openstack/heat-engine-69599c8b5f-rjs76"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.875146 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.882022 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce072a84-75da-4060-9c4a-d029b3a14947-config-data-custom\") pod \"heat-engine-69599c8b5f-rjs76\" (UID: \"ce072a84-75da-4060-9c4a-d029b3a14947\") " pod="openstack/heat-engine-69599c8b5f-rjs76"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.895968 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce072a84-75da-4060-9c4a-d029b3a14947-config-data\") pod \"heat-engine-69599c8b5f-rjs76\" (UID: \"ce072a84-75da-4060-9c4a-d029b3a14947\") " pod="openstack/heat-engine-69599c8b5f-rjs76"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.905814 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce072a84-75da-4060-9c4a-d029b3a14947-combined-ca-bundle\") pod \"heat-engine-69599c8b5f-rjs76\" (UID: \"ce072a84-75da-4060-9c4a-d029b3a14947\") " pod="openstack/heat-engine-69599c8b5f-rjs76"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.914199 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-59bff7fb85-wq6q5"]
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.917083 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxc9d\" (UniqueName: \"kubernetes.io/projected/ce072a84-75da-4060-9c4a-d029b3a14947-kube-api-access-wxc9d\") pod \"heat-engine-69599c8b5f-rjs76\" (UID: \"ce072a84-75da-4060-9c4a-d029b3a14947\") " pod="openstack/heat-engine-69599c8b5f-rjs76"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.963653 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-8588ddc4dc-rq722"]
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.966025 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8588ddc4dc-rq722"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.973640 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.976791 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-m87tn\" (UID: \"a217ca40-3638-474b-b739-cb8784823fa6\") " pod="openstack/dnsmasq-dns-7756b9d78c-m87tn"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.976890 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crj69\" (UniqueName: \"kubernetes.io/projected/a217ca40-3638-474b-b739-cb8784823fa6-kube-api-access-crj69\") pod \"dnsmasq-dns-7756b9d78c-m87tn\" (UID: \"a217ca40-3638-474b-b739-cb8784823fa6\") " pod="openstack/dnsmasq-dns-7756b9d78c-m87tn"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.976918 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bddd12fa-0653-4199-867f-bfdf51350b39-config-data\") pod \"heat-cfnapi-59bff7fb85-wq6q5\" (UID: \"bddd12fa-0653-4199-867f-bfdf51350b39\") " pod="openstack/heat-cfnapi-59bff7fb85-wq6q5"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.976987 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bddd12fa-0653-4199-867f-bfdf51350b39-combined-ca-bundle\") pod \"heat-cfnapi-59bff7fb85-wq6q5\" (UID: \"bddd12fa-0653-4199-867f-bfdf51350b39\") " pod="openstack/heat-cfnapi-59bff7fb85-wq6q5"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.977008 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bddd12fa-0653-4199-867f-bfdf51350b39-config-data-custom\") pod \"heat-cfnapi-59bff7fb85-wq6q5\" (UID: \"bddd12fa-0653-4199-867f-bfdf51350b39\") " pod="openstack/heat-cfnapi-59bff7fb85-wq6q5"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.977034 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-m87tn\" (UID: \"a217ca40-3638-474b-b739-cb8784823fa6\") " pod="openstack/dnsmasq-dns-7756b9d78c-m87tn"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.977062 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jwmm\" (UniqueName: \"kubernetes.io/projected/bddd12fa-0653-4199-867f-bfdf51350b39-kube-api-access-6jwmm\") pod \"heat-cfnapi-59bff7fb85-wq6q5\" (UID: \"bddd12fa-0653-4199-867f-bfdf51350b39\") " pod="openstack/heat-cfnapi-59bff7fb85-wq6q5"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.977116 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-m87tn\" (UID: \"a217ca40-3638-474b-b739-cb8784823fa6\") " pod="openstack/dnsmasq-dns-7756b9d78c-m87tn"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.977153 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-m87tn\" (UID: \"a217ca40-3638-474b-b739-cb8784823fa6\") " pod="openstack/dnsmasq-dns-7756b9d78c-m87tn"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.977176 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-config\") pod \"dnsmasq-dns-7756b9d78c-m87tn\" (UID: \"a217ca40-3638-474b-b739-cb8784823fa6\") " pod="openstack/dnsmasq-dns-7756b9d78c-m87tn"
Feb 02 13:24:53 crc kubenswrapper[4721]: I0202 13:24:53.990249 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-8588ddc4dc-rq722"]
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.005667 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-69599c8b5f-rjs76"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.079219 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bddd12fa-0653-4199-867f-bfdf51350b39-combined-ca-bundle\") pod \"heat-cfnapi-59bff7fb85-wq6q5\" (UID: \"bddd12fa-0653-4199-867f-bfdf51350b39\") " pod="openstack/heat-cfnapi-59bff7fb85-wq6q5"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.080055 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bddd12fa-0653-4199-867f-bfdf51350b39-config-data-custom\") pod \"heat-cfnapi-59bff7fb85-wq6q5\" (UID: \"bddd12fa-0653-4199-867f-bfdf51350b39\") " pod="openstack/heat-cfnapi-59bff7fb85-wq6q5"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.080351 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee4f36c4-39c4-4cb4-b24c-676b76966752-config-data-custom\") pod \"heat-api-8588ddc4dc-rq722\" (UID: \"ee4f36c4-39c4-4cb4-b24c-676b76966752\") " pod="openstack/heat-api-8588ddc4dc-rq722"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.080399 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-m87tn\" (UID: \"a217ca40-3638-474b-b739-cb8784823fa6\") " pod="openstack/dnsmasq-dns-7756b9d78c-m87tn"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.080425 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cwgk\" (UniqueName: \"kubernetes.io/projected/ee4f36c4-39c4-4cb4-b24c-676b76966752-kube-api-access-9cwgk\") pod \"heat-api-8588ddc4dc-rq722\" (UID: \"ee4f36c4-39c4-4cb4-b24c-676b76966752\") " pod="openstack/heat-api-8588ddc4dc-rq722"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.080463 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4f36c4-39c4-4cb4-b24c-676b76966752-combined-ca-bundle\") pod \"heat-api-8588ddc4dc-rq722\" (UID: \"ee4f36c4-39c4-4cb4-b24c-676b76966752\") " pod="openstack/heat-api-8588ddc4dc-rq722"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.080507 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jwmm\" (UniqueName: \"kubernetes.io/projected/bddd12fa-0653-4199-867f-bfdf51350b39-kube-api-access-6jwmm\") pod \"heat-cfnapi-59bff7fb85-wq6q5\" (UID: \"bddd12fa-0653-4199-867f-bfdf51350b39\") " pod="openstack/heat-cfnapi-59bff7fb85-wq6q5"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.080587 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-m87tn\" (UID: \"a217ca40-3638-474b-b739-cb8784823fa6\") " pod="openstack/dnsmasq-dns-7756b9d78c-m87tn"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.080627 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4f36c4-39c4-4cb4-b24c-676b76966752-config-data\") pod \"heat-api-8588ddc4dc-rq722\" (UID: \"ee4f36c4-39c4-4cb4-b24c-676b76966752\") " pod="openstack/heat-api-8588ddc4dc-rq722"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.080677 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-m87tn\" (UID: \"a217ca40-3638-474b-b739-cb8784823fa6\") " pod="openstack/dnsmasq-dns-7756b9d78c-m87tn"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.080726 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-config\") pod \"dnsmasq-dns-7756b9d78c-m87tn\" (UID: \"a217ca40-3638-474b-b739-cb8784823fa6\") " pod="openstack/dnsmasq-dns-7756b9d78c-m87tn"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.080852 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-m87tn\" (UID: \"a217ca40-3638-474b-b739-cb8784823fa6\") " pod="openstack/dnsmasq-dns-7756b9d78c-m87tn"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.081011 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crj69\" (UniqueName: \"kubernetes.io/projected/a217ca40-3638-474b-b739-cb8784823fa6-kube-api-access-crj69\") pod \"dnsmasq-dns-7756b9d78c-m87tn\" (UID: \"a217ca40-3638-474b-b739-cb8784823fa6\") " pod="openstack/dnsmasq-dns-7756b9d78c-m87tn"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.081048 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bddd12fa-0653-4199-867f-bfdf51350b39-config-data\") pod \"heat-cfnapi-59bff7fb85-wq6q5\" (UID: \"bddd12fa-0653-4199-867f-bfdf51350b39\") " pod="openstack/heat-cfnapi-59bff7fb85-wq6q5"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.082043 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-m87tn\" (UID: \"a217ca40-3638-474b-b739-cb8784823fa6\") " pod="openstack/dnsmasq-dns-7756b9d78c-m87tn"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.082088 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-config\") pod \"dnsmasq-dns-7756b9d78c-m87tn\" (UID: \"a217ca40-3638-474b-b739-cb8784823fa6\") " pod="openstack/dnsmasq-dns-7756b9d78c-m87tn"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.082495 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-m87tn\" (UID: \"a217ca40-3638-474b-b739-cb8784823fa6\") " pod="openstack/dnsmasq-dns-7756b9d78c-m87tn"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.083185 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-m87tn\" (UID: \"a217ca40-3638-474b-b739-cb8784823fa6\") " pod="openstack/dnsmasq-dns-7756b9d78c-m87tn"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.084893 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bddd12fa-0653-4199-867f-bfdf51350b39-combined-ca-bundle\") pod \"heat-cfnapi-59bff7fb85-wq6q5\" (UID: \"bddd12fa-0653-4199-867f-bfdf51350b39\") " pod="openstack/heat-cfnapi-59bff7fb85-wq6q5"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.088553 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-m87tn\" (UID: \"a217ca40-3638-474b-b739-cb8784823fa6\") " pod="openstack/dnsmasq-dns-7756b9d78c-m87tn"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.090823 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bddd12fa-0653-4199-867f-bfdf51350b39-config-data-custom\") pod \"heat-cfnapi-59bff7fb85-wq6q5\" (UID: \"bddd12fa-0653-4199-867f-bfdf51350b39\") " pod="openstack/heat-cfnapi-59bff7fb85-wq6q5"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.112534 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bddd12fa-0653-4199-867f-bfdf51350b39-config-data\") pod \"heat-cfnapi-59bff7fb85-wq6q5\" (UID: \"bddd12fa-0653-4199-867f-bfdf51350b39\") " pod="openstack/heat-cfnapi-59bff7fb85-wq6q5"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.116876 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jwmm\" (UniqueName: \"kubernetes.io/projected/bddd12fa-0653-4199-867f-bfdf51350b39-kube-api-access-6jwmm\") pod \"heat-cfnapi-59bff7fb85-wq6q5\" (UID: \"bddd12fa-0653-4199-867f-bfdf51350b39\") " pod="openstack/heat-cfnapi-59bff7fb85-wq6q5"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.119763 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crj69\" (UniqueName: \"kubernetes.io/projected/a217ca40-3638-474b-b739-cb8784823fa6-kube-api-access-crj69\") pod \"dnsmasq-dns-7756b9d78c-m87tn\" (UID: \"a217ca40-3638-474b-b739-cb8784823fa6\") " pod="openstack/dnsmasq-dns-7756b9d78c-m87tn"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.134661 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-m87tn"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.183395 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee4f36c4-39c4-4cb4-b24c-676b76966752-config-data-custom\") pod \"heat-api-8588ddc4dc-rq722\" (UID: \"ee4f36c4-39c4-4cb4-b24c-676b76966752\") " pod="openstack/heat-api-8588ddc4dc-rq722"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.183442 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cwgk\" (UniqueName: \"kubernetes.io/projected/ee4f36c4-39c4-4cb4-b24c-676b76966752-kube-api-access-9cwgk\") pod \"heat-api-8588ddc4dc-rq722\" (UID: \"ee4f36c4-39c4-4cb4-b24c-676b76966752\") " pod="openstack/heat-api-8588ddc4dc-rq722"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.183466 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4f36c4-39c4-4cb4-b24c-676b76966752-combined-ca-bundle\") pod \"heat-api-8588ddc4dc-rq722\" (UID: \"ee4f36c4-39c4-4cb4-b24c-676b76966752\") " pod="openstack/heat-api-8588ddc4dc-rq722"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.183520 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4f36c4-39c4-4cb4-b24c-676b76966752-config-data\") pod \"heat-api-8588ddc4dc-rq722\" (UID: \"ee4f36c4-39c4-4cb4-b24c-676b76966752\") " pod="openstack/heat-api-8588ddc4dc-rq722"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.190281 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4f36c4-39c4-4cb4-b24c-676b76966752-config-data\") pod \"heat-api-8588ddc4dc-rq722\" (UID: \"ee4f36c4-39c4-4cb4-b24c-676b76966752\") " pod="openstack/heat-api-8588ddc4dc-rq722"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.190832 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee4f36c4-39c4-4cb4-b24c-676b76966752-config-data-custom\") pod \"heat-api-8588ddc4dc-rq722\" (UID: \"ee4f36c4-39c4-4cb4-b24c-676b76966752\") " pod="openstack/heat-api-8588ddc4dc-rq722"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.191875 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4f36c4-39c4-4cb4-b24c-676b76966752-combined-ca-bundle\") pod \"heat-api-8588ddc4dc-rq722\" (UID: \"ee4f36c4-39c4-4cb4-b24c-676b76966752\") " pod="openstack/heat-api-8588ddc4dc-rq722"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.207819 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cwgk\" (UniqueName: \"kubernetes.io/projected/ee4f36c4-39c4-4cb4-b24c-676b76966752-kube-api-access-9cwgk\") pod \"heat-api-8588ddc4dc-rq722\" (UID: \"ee4f36c4-39c4-4cb4-b24c-676b76966752\") " pod="openstack/heat-api-8588ddc4dc-rq722"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.292847 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.316176 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-59bff7fb85-wq6q5"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.335726 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8588ddc4dc-rq722"
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.844199 4721 generic.go:334] "Generic (PLEG): container finished" podID="a1482d2e-b885-44bd-b679-109f0b9698ea" containerID="0b3cb07d111bcefe4cce6e8cdec0586b864539201a7ae3eef461e7791d1c123a" exitCode=0
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.844520 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1482d2e-b885-44bd-b679-109f0b9698ea","Type":"ContainerDied","Data":"0b3cb07d111bcefe4cce6e8cdec0586b864539201a7ae3eef461e7791d1c123a"}
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.846187 4721 generic.go:334] "Generic (PLEG): container finished" podID="6f746721-5da3-4418-8ef6-d0b88f2121bc" containerID="4badbbeb19bd1ce4dabd3dbcb1949a66c97f449e7bcb0e940791d3782f7337b1" exitCode=0
Feb 02 13:24:54 crc kubenswrapper[4721]: I0202 13:24:54.846208 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7556fd87fb-z78lc" event={"ID":"6f746721-5da3-4418-8ef6-d0b88f2121bc","Type":"ContainerDied","Data":"4badbbeb19bd1ce4dabd3dbcb1949a66c97f449e7bcb0e940791d3782f7337b1"}
Feb 02 13:24:55 crc kubenswrapper[4721]: I0202 13:24:55.084496 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mlbxn" podUID="37372b76-ef54-4a44-9b56-dea754373219" containerName="registry-server" probeResult="failure" output=<
Feb 02 13:24:55 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s
Feb 02 13:24:55 crc kubenswrapper[4721]: >
Feb 02 13:24:58 crc kubenswrapper[4721]: I0202 13:24:58.192799 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-9b87bd57c-2glsn"
Feb 02 13:24:58 crc kubenswrapper[4721]: I0202 13:24:58.216334 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-9b87bd57c-2glsn"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.482128 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-596786fd64-rpzql"]
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.534612 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-596786fd64-rpzql"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.540107 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-596786fd64-rpzql"]
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.569001 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-869db4994-hgxnh"]
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.580487 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-869db4994-hgxnh"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.619219 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6c76b54b86-n9kln"]
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.620878 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6c76b54b86-n9kln"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.630286 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-869db4994-hgxnh"]
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.745414 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5f7cb67-4d7c-4bc8-bf45-c949450206f0-combined-ca-bundle\") pod \"heat-cfnapi-6c76b54b86-n9kln\" (UID: \"c5f7cb67-4d7c-4bc8-bf45-c949450206f0\") " pod="openstack/heat-cfnapi-6c76b54b86-n9kln"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.745512 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6616d7ae-f7c6-4fe2-bf68-a0668a84fb8c-config-data\") pod \"heat-engine-596786fd64-rpzql\" (UID: \"6616d7ae-f7c6-4fe2-bf68-a0668a84fb8c\") " pod="openstack/heat-engine-596786fd64-rpzql"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.745584 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs6zw\" (UniqueName: \"kubernetes.io/projected/6616d7ae-f7c6-4fe2-bf68-a0668a84fb8c-kube-api-access-xs6zw\") pod \"heat-engine-596786fd64-rpzql\" (UID: \"6616d7ae-f7c6-4fe2-bf68-a0668a84fb8c\") " pod="openstack/heat-engine-596786fd64-rpzql"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.745649 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6616d7ae-f7c6-4fe2-bf68-a0668a84fb8c-combined-ca-bundle\") pod \"heat-engine-596786fd64-rpzql\" (UID: \"6616d7ae-f7c6-4fe2-bf68-a0668a84fb8c\") " pod="openstack/heat-engine-596786fd64-rpzql"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.745678 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8-config-data-custom\") pod \"heat-api-869db4994-hgxnh\" (UID: \"2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8\") " pod="openstack/heat-api-869db4994-hgxnh"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.745700 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8-config-data\") pod \"heat-api-869db4994-hgxnh\" (UID: \"2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8\") " pod="openstack/heat-api-869db4994-hgxnh"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.745828 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95q9j\" (UniqueName: \"kubernetes.io/projected/2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8-kube-api-access-95q9j\") pod \"heat-api-869db4994-hgxnh\" (UID: \"2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8\") " pod="openstack/heat-api-869db4994-hgxnh"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.745906 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5f7cb67-4d7c-4bc8-bf45-c949450206f0-config-data-custom\") pod \"heat-cfnapi-6c76b54b86-n9kln\" (UID: \"c5f7cb67-4d7c-4bc8-bf45-c949450206f0\") " pod="openstack/heat-cfnapi-6c76b54b86-n9kln"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.745930 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5f7cb67-4d7c-4bc8-bf45-c949450206f0-config-data\") pod \"heat-cfnapi-6c76b54b86-n9kln\" (UID: \"c5f7cb67-4d7c-4bc8-bf45-c949450206f0\") " pod="openstack/heat-cfnapi-6c76b54b86-n9kln"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.745953 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6616d7ae-f7c6-4fe2-bf68-a0668a84fb8c-config-data-custom\") pod \"heat-engine-596786fd64-rpzql\" (UID: \"6616d7ae-f7c6-4fe2-bf68-a0668a84fb8c\") " pod="openstack/heat-engine-596786fd64-rpzql"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.745993 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8-combined-ca-bundle\") pod \"heat-api-869db4994-hgxnh\" (UID: \"2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8\") " pod="openstack/heat-api-869db4994-hgxnh"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.746057 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vs22\" (UniqueName: \"kubernetes.io/projected/c5f7cb67-4d7c-4bc8-bf45-c949450206f0-kube-api-access-7vs22\") pod \"heat-cfnapi-6c76b54b86-n9kln\" (UID: \"c5f7cb67-4d7c-4bc8-bf45-c949450206f0\") " pod="openstack/heat-cfnapi-6c76b54b86-n9kln"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.746239 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6c76b54b86-n9kln"]
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.849555 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5f7cb67-4d7c-4bc8-bf45-c949450206f0-config-data\") pod \"heat-cfnapi-6c76b54b86-n9kln\" (UID: \"c5f7cb67-4d7c-4bc8-bf45-c949450206f0\") " pod="openstack/heat-cfnapi-6c76b54b86-n9kln"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.850170 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6616d7ae-f7c6-4fe2-bf68-a0668a84fb8c-config-data-custom\") pod \"heat-engine-596786fd64-rpzql\" (UID: \"6616d7ae-f7c6-4fe2-bf68-a0668a84fb8c\") " pod="openstack/heat-engine-596786fd64-rpzql"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.850237 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8-combined-ca-bundle\") pod \"heat-api-869db4994-hgxnh\" (UID: \"2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8\") " pod="openstack/heat-api-869db4994-hgxnh"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.850322 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vs22\" (UniqueName: \"kubernetes.io/projected/c5f7cb67-4d7c-4bc8-bf45-c949450206f0-kube-api-access-7vs22\") pod \"heat-cfnapi-6c76b54b86-n9kln\" (UID: \"c5f7cb67-4d7c-4bc8-bf45-c949450206f0\") " pod="openstack/heat-cfnapi-6c76b54b86-n9kln"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.851328 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5f7cb67-4d7c-4bc8-bf45-c949450206f0-combined-ca-bundle\") pod \"heat-cfnapi-6c76b54b86-n9kln\" (UID: \"c5f7cb67-4d7c-4bc8-bf45-c949450206f0\") " pod="openstack/heat-cfnapi-6c76b54b86-n9kln"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.851536 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6616d7ae-f7c6-4fe2-bf68-a0668a84fb8c-config-data\") pod \"heat-engine-596786fd64-rpzql\" (UID: \"6616d7ae-f7c6-4fe2-bf68-a0668a84fb8c\") " pod="openstack/heat-engine-596786fd64-rpzql"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.851700 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs6zw\" (UniqueName: \"kubernetes.io/projected/6616d7ae-f7c6-4fe2-bf68-a0668a84fb8c-kube-api-access-xs6zw\") pod \"heat-engine-596786fd64-rpzql\" (UID: \"6616d7ae-f7c6-4fe2-bf68-a0668a84fb8c\") " pod="openstack/heat-engine-596786fd64-rpzql"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.851885 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6616d7ae-f7c6-4fe2-bf68-a0668a84fb8c-combined-ca-bundle\") pod \"heat-engine-596786fd64-rpzql\" (UID: \"6616d7ae-f7c6-4fe2-bf68-a0668a84fb8c\") " pod="openstack/heat-engine-596786fd64-rpzql"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.851941 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8-config-data-custom\") pod \"heat-api-869db4994-hgxnh\" (UID: \"2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8\") " pod="openstack/heat-api-869db4994-hgxnh"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.851976 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8-config-data\") pod \"heat-api-869db4994-hgxnh\" (UID: \"2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8\") " pod="openstack/heat-api-869db4994-hgxnh"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.852395 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95q9j\" (UniqueName: \"kubernetes.io/projected/2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8-kube-api-access-95q9j\") pod \"heat-api-869db4994-hgxnh\" (UID: \"2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8\") " pod="openstack/heat-api-869db4994-hgxnh"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.852645 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5f7cb67-4d7c-4bc8-bf45-c949450206f0-config-data-custom\") pod \"heat-cfnapi-6c76b54b86-n9kln\" (UID: \"c5f7cb67-4d7c-4bc8-bf45-c949450206f0\") " pod="openstack/heat-cfnapi-6c76b54b86-n9kln"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.859372 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6616d7ae-f7c6-4fe2-bf68-a0668a84fb8c-config-data-custom\") pod \"heat-engine-596786fd64-rpzql\" (UID: \"6616d7ae-f7c6-4fe2-bf68-a0668a84fb8c\") " pod="openstack/heat-engine-596786fd64-rpzql"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.862484 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5f7cb67-4d7c-4bc8-bf45-c949450206f0-combined-ca-bundle\") pod \"heat-cfnapi-6c76b54b86-n9kln\" (UID: \"c5f7cb67-4d7c-4bc8-bf45-c949450206f0\") " pod="openstack/heat-cfnapi-6c76b54b86-n9kln"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.863335 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8-config-data\") pod \"heat-api-869db4994-hgxnh\" (UID: \"2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8\") " pod="openstack/heat-api-869db4994-hgxnh"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.863935 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8-combined-ca-bundle\") pod \"heat-api-869db4994-hgxnh\" (UID: \"2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8\") " pod="openstack/heat-api-869db4994-hgxnh"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.873056 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5f7cb67-4d7c-4bc8-bf45-c949450206f0-config-data-custom\") pod \"heat-cfnapi-6c76b54b86-n9kln\" (UID: \"c5f7cb67-4d7c-4bc8-bf45-c949450206f0\") " pod="openstack/heat-cfnapi-6c76b54b86-n9kln"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.874178 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6616d7ae-f7c6-4fe2-bf68-a0668a84fb8c-config-data\") pod \"heat-engine-596786fd64-rpzql\" (UID: \"6616d7ae-f7c6-4fe2-bf68-a0668a84fb8c\") " pod="openstack/heat-engine-596786fd64-rpzql"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.875288 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5f7cb67-4d7c-4bc8-bf45-c949450206f0-config-data\") pod \"heat-cfnapi-6c76b54b86-n9kln\" (UID: \"c5f7cb67-4d7c-4bc8-bf45-c949450206f0\") " pod="openstack/heat-cfnapi-6c76b54b86-n9kln"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.884832 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95q9j\" (UniqueName: \"kubernetes.io/projected/2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8-kube-api-access-95q9j\") pod \"heat-api-869db4994-hgxnh\" (UID: \"2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8\") " pod="openstack/heat-api-869db4994-hgxnh"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.884848 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6616d7ae-f7c6-4fe2-bf68-a0668a84fb8c-combined-ca-bundle\") pod \"heat-engine-596786fd64-rpzql\" (UID: \"6616d7ae-f7c6-4fe2-bf68-a0668a84fb8c\") " pod="openstack/heat-engine-596786fd64-rpzql"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.885034 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8-config-data-custom\") pod \"heat-api-869db4994-hgxnh\" (UID: \"2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8\") " pod="openstack/heat-api-869db4994-hgxnh"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.893304 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs6zw\" (UniqueName: \"kubernetes.io/projected/6616d7ae-f7c6-4fe2-bf68-a0668a84fb8c-kube-api-access-xs6zw\") pod \"heat-engine-596786fd64-rpzql\" (UID: \"6616d7ae-f7c6-4fe2-bf68-a0668a84fb8c\") " pod="openstack/heat-engine-596786fd64-rpzql"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.901877 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vs22\" (UniqueName: \"kubernetes.io/projected/c5f7cb67-4d7c-4bc8-bf45-c949450206f0-kube-api-access-7vs22\") pod \"heat-cfnapi-6c76b54b86-n9kln\" (UID: \"c5f7cb67-4d7c-4bc8-bf45-c949450206f0\") " pod="openstack/heat-cfnapi-6c76b54b86-n9kln"
Feb 02 13:25:00 crc kubenswrapper[4721]: I0202 13:25:00.926668 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-869db4994-hgxnh"
Feb 02 13:25:01 crc kubenswrapper[4721]: I0202 13:25:01.005038 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6c76b54b86-n9kln"
Feb 02 13:25:01 crc kubenswrapper[4721]: I0202 13:25:01.169143 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-596786fd64-rpzql"
Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.573375 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-8588ddc4dc-rq722"]
Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.621149 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-59bff7fb85-wq6q5"]
Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.646153 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-86974d69bd-t6gcz"]
Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.651656 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-86974d69bd-t6gcz"
Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.655797 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc"
Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.663366 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc"
Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.669949 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-64c55c4cc7-4htzp"]
Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.671832 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-64c55c4cc7-4htzp"
Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.679310 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc"
Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.680458 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc"
Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.692014 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-64c55c4cc7-4htzp"]
Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.721151 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-86974d69bd-t6gcz"]
Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.810698 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c23b064-e24b-4ab3-886d-d731004b7479-config-data-custom\") pod \"heat-cfnapi-64c55c4cc7-4htzp\" (UID: \"5c23b064-e24b-4ab3-886d-d731004b7479\") " pod="openstack/heat-cfnapi-64c55c4cc7-4htzp"
Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.810739 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c23b064-e24b-4ab3-886d-d731004b7479-internal-tls-certs\") pod \"heat-cfnapi-64c55c4cc7-4htzp\" (UID: \"5c23b064-e24b-4ab3-886d-d731004b7479\") " pod="openstack/heat-cfnapi-64c55c4cc7-4htzp"
Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.810770 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c23b064-e24b-4ab3-886d-d731004b7479-public-tls-certs\") pod \"heat-cfnapi-64c55c4cc7-4htzp\" (UID: \"5c23b064-e24b-4ab3-886d-d731004b7479\") " pod="openstack/heat-cfnapi-64c55c4cc7-4htzp"
Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.810803 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c23b064-e24b-4ab3-886d-d731004b7479-config-data\") pod \"heat-cfnapi-64c55c4cc7-4htzp\" (UID: \"5c23b064-e24b-4ab3-886d-d731004b7479\") " pod="openstack/heat-cfnapi-64c55c4cc7-4htzp"
Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.810822 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d1412d5-76f7-4132-889d-f706432b3ecc-internal-tls-certs\") pod \"heat-api-86974d69bd-t6gcz\" (UID: \"5d1412d5-76f7-4132-889d-f706432b3ecc\") " pod="openstack/heat-api-86974d69bd-t6gcz"
Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.810853 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjzhs\" (UniqueName: \"kubernetes.io/projected/5d1412d5-76f7-4132-889d-f706432b3ecc-kube-api-access-qjzhs\") pod \"heat-api-86974d69bd-t6gcz\" (UID: \"5d1412d5-76f7-4132-889d-f706432b3ecc\") " pod="openstack/heat-api-86974d69bd-t6gcz"
Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.811013 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d1412d5-76f7-4132-889d-f706432b3ecc-public-tls-certs\") pod \"heat-api-86974d69bd-t6gcz\" (UID: \"5d1412d5-76f7-4132-889d-f706432b3ecc\") " pod="openstack/heat-api-86974d69bd-t6gcz"
Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.811167 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wfhs\" (UniqueName: \"kubernetes.io/projected/5c23b064-e24b-4ab3-886d-d731004b7479-kube-api-access-4wfhs\") pod \"heat-cfnapi-64c55c4cc7-4htzp\" (UID: \"5c23b064-e24b-4ab3-886d-d731004b7479\") " pod="openstack/heat-cfnapi-64c55c4cc7-4htzp"
Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.811957 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c23b064-e24b-4ab3-886d-d731004b7479-combined-ca-bundle\") pod \"heat-cfnapi-64c55c4cc7-4htzp\" (UID: \"5c23b064-e24b-4ab3-886d-d731004b7479\") " pod="openstack/heat-cfnapi-64c55c4cc7-4htzp"
Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.812211 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d1412d5-76f7-4132-889d-f706432b3ecc-config-data\") pod \"heat-api-86974d69bd-t6gcz\" (UID: \"5d1412d5-76f7-4132-889d-f706432b3ecc\") " pod="openstack/heat-api-86974d69bd-t6gcz"
Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.812366 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d1412d5-76f7-4132-889d-f706432b3ecc-combined-ca-bundle\") pod \"heat-api-86974d69bd-t6gcz\" (UID: \"5d1412d5-76f7-4132-889d-f706432b3ecc\") " pod="openstack/heat-api-86974d69bd-t6gcz"
Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.812599 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d1412d5-76f7-4132-889d-f706432b3ecc-config-data-custom\") pod \"heat-api-86974d69bd-t6gcz\" (UID: \"5d1412d5-76f7-4132-889d-f706432b3ecc\") " pod="openstack/heat-api-86974d69bd-t6gcz"
Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.919129 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c23b064-e24b-4ab3-886d-d731004b7479-combined-ca-bundle\") pod \"heat-cfnapi-64c55c4cc7-4htzp\" (UID: \"5c23b064-e24b-4ab3-886d-d731004b7479\") " pod="openstack/heat-cfnapi-64c55c4cc7-4htzp"
Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.919783 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d1412d5-76f7-4132-889d-f706432b3ecc-config-data\") pod \"heat-api-86974d69bd-t6gcz\" (UID: \"5d1412d5-76f7-4132-889d-f706432b3ecc\") " pod="openstack/heat-api-86974d69bd-t6gcz"
Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.919830 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d1412d5-76f7-4132-889d-f706432b3ecc-combined-ca-bundle\") pod \"heat-api-86974d69bd-t6gcz\" (UID: \"5d1412d5-76f7-4132-889d-f706432b3ecc\") " pod="openstack/heat-api-86974d69bd-t6gcz"
Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.919927 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d1412d5-76f7-4132-889d-f706432b3ecc-config-data-custom\") pod 
\"heat-api-86974d69bd-t6gcz\" (UID: \"5d1412d5-76f7-4132-889d-f706432b3ecc\") " pod="openstack/heat-api-86974d69bd-t6gcz" Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.919975 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c23b064-e24b-4ab3-886d-d731004b7479-config-data-custom\") pod \"heat-cfnapi-64c55c4cc7-4htzp\" (UID: \"5c23b064-e24b-4ab3-886d-d731004b7479\") " pod="openstack/heat-cfnapi-64c55c4cc7-4htzp" Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.920000 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c23b064-e24b-4ab3-886d-d731004b7479-internal-tls-certs\") pod \"heat-cfnapi-64c55c4cc7-4htzp\" (UID: \"5c23b064-e24b-4ab3-886d-d731004b7479\") " pod="openstack/heat-cfnapi-64c55c4cc7-4htzp" Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.920033 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c23b064-e24b-4ab3-886d-d731004b7479-public-tls-certs\") pod \"heat-cfnapi-64c55c4cc7-4htzp\" (UID: \"5c23b064-e24b-4ab3-886d-d731004b7479\") " pod="openstack/heat-cfnapi-64c55c4cc7-4htzp" Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.920146 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d1412d5-76f7-4132-889d-f706432b3ecc-internal-tls-certs\") pod \"heat-api-86974d69bd-t6gcz\" (UID: \"5d1412d5-76f7-4132-889d-f706432b3ecc\") " pod="openstack/heat-api-86974d69bd-t6gcz" Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.920217 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjzhs\" (UniqueName: \"kubernetes.io/projected/5d1412d5-76f7-4132-889d-f706432b3ecc-kube-api-access-qjzhs\") pod \"heat-api-86974d69bd-t6gcz\" (UID: \"5d1412d5-76f7-4132-889d-f706432b3ecc\") " pod="openstack/heat-api-86974d69bd-t6gcz" Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.920253 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c23b064-e24b-4ab3-886d-d731004b7479-config-data\") pod \"heat-cfnapi-64c55c4cc7-4htzp\" (UID: \"5c23b064-e24b-4ab3-886d-d731004b7479\") " pod="openstack/heat-cfnapi-64c55c4cc7-4htzp" Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.920299 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d1412d5-76f7-4132-889d-f706432b3ecc-public-tls-certs\") pod \"heat-api-86974d69bd-t6gcz\" (UID: \"5d1412d5-76f7-4132-889d-f706432b3ecc\") " pod="openstack/heat-api-86974d69bd-t6gcz" Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.920351 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wfhs\" (UniqueName: \"kubernetes.io/projected/5c23b064-e24b-4ab3-886d-d731004b7479-kube-api-access-4wfhs\") pod \"heat-cfnapi-64c55c4cc7-4htzp\" (UID: \"5c23b064-e24b-4ab3-886d-d731004b7479\") " pod="openstack/heat-cfnapi-64c55c4cc7-4htzp" Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.970115 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wfhs\" (UniqueName: \"kubernetes.io/projected/5c23b064-e24b-4ab3-886d-d731004b7479-kube-api-access-4wfhs\") pod \"heat-cfnapi-64c55c4cc7-4htzp\" 
(UID: \"5c23b064-e24b-4ab3-886d-d731004b7479\") " pod="openstack/heat-cfnapi-64c55c4cc7-4htzp" Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.972658 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjzhs\" (UniqueName: \"kubernetes.io/projected/5d1412d5-76f7-4132-889d-f706432b3ecc-kube-api-access-qjzhs\") pod \"heat-api-86974d69bd-t6gcz\" (UID: \"5d1412d5-76f7-4132-889d-f706432b3ecc\") " pod="openstack/heat-api-86974d69bd-t6gcz" Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.973307 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d1412d5-76f7-4132-889d-f706432b3ecc-config-data-custom\") pod \"heat-api-86974d69bd-t6gcz\" (UID: \"5d1412d5-76f7-4132-889d-f706432b3ecc\") " pod="openstack/heat-api-86974d69bd-t6gcz" Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.973980 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c23b064-e24b-4ab3-886d-d731004b7479-public-tls-certs\") pod \"heat-cfnapi-64c55c4cc7-4htzp\" (UID: \"5c23b064-e24b-4ab3-886d-d731004b7479\") " pod="openstack/heat-cfnapi-64c55c4cc7-4htzp" Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.974573 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c23b064-e24b-4ab3-886d-d731004b7479-combined-ca-bundle\") pod \"heat-cfnapi-64c55c4cc7-4htzp\" (UID: \"5c23b064-e24b-4ab3-886d-d731004b7479\") " pod="openstack/heat-cfnapi-64c55c4cc7-4htzp" Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.975316 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d1412d5-76f7-4132-889d-f706432b3ecc-public-tls-certs\") pod \"heat-api-86974d69bd-t6gcz\" (UID: \"5d1412d5-76f7-4132-889d-f706432b3ecc\") " pod="openstack/heat-api-86974d69bd-t6gcz" Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.975517 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d1412d5-76f7-4132-889d-f706432b3ecc-internal-tls-certs\") pod \"heat-api-86974d69bd-t6gcz\" (UID: \"5d1412d5-76f7-4132-889d-f706432b3ecc\") " pod="openstack/heat-api-86974d69bd-t6gcz" Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.977003 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c23b064-e24b-4ab3-886d-d731004b7479-internal-tls-certs\") pod \"heat-cfnapi-64c55c4cc7-4htzp\" (UID: \"5c23b064-e24b-4ab3-886d-d731004b7479\") " pod="openstack/heat-cfnapi-64c55c4cc7-4htzp" Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.977946 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d1412d5-76f7-4132-889d-f706432b3ecc-combined-ca-bundle\") pod \"heat-api-86974d69bd-t6gcz\" (UID: \"5d1412d5-76f7-4132-889d-f706432b3ecc\") " pod="openstack/heat-api-86974d69bd-t6gcz" Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.977953 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d1412d5-76f7-4132-889d-f706432b3ecc-config-data\") pod \"heat-api-86974d69bd-t6gcz\" (UID: \"5d1412d5-76f7-4132-889d-f706432b3ecc\") " pod="openstack/heat-api-86974d69bd-t6gcz" Feb 02 13:25:02 crc 
kubenswrapper[4721]: I0202 13:25:02.988486 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c23b064-e24b-4ab3-886d-d731004b7479-config-data\") pod \"heat-cfnapi-64c55c4cc7-4htzp\" (UID: \"5c23b064-e24b-4ab3-886d-d731004b7479\") " pod="openstack/heat-cfnapi-64c55c4cc7-4htzp" Feb 02 13:25:02 crc kubenswrapper[4721]: I0202 13:25:02.996845 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c23b064-e24b-4ab3-886d-d731004b7479-config-data-custom\") pod \"heat-cfnapi-64c55c4cc7-4htzp\" (UID: \"5c23b064-e24b-4ab3-886d-d731004b7479\") " pod="openstack/heat-cfnapi-64c55c4cc7-4htzp" Feb 02 13:25:03 crc kubenswrapper[4721]: I0202 13:25:03.077044 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-86974d69bd-t6gcz" Feb 02 13:25:03 crc kubenswrapper[4721]: I0202 13:25:03.109983 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-64c55c4cc7-4htzp" Feb 02 13:25:03 crc kubenswrapper[4721]: I0202 13:25:03.534945 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-8588ddc4dc-rq722"] Feb 02 13:25:03 crc kubenswrapper[4721]: I0202 13:25:03.939645 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:25:03 crc kubenswrapper[4721]: I0202 13:25:03.956605 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7556fd87fb-z78lc" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.002727 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1482d2e-b885-44bd-b679-109f0b9698ea-scripts\") pod \"a1482d2e-b885-44bd-b679-109f0b9698ea\" (UID: \"a1482d2e-b885-44bd-b679-109f0b9698ea\") " Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.003165 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1482d2e-b885-44bd-b679-109f0b9698ea-config-data\") pod \"a1482d2e-b885-44bd-b679-109f0b9698ea\" (UID: \"a1482d2e-b885-44bd-b679-109f0b9698ea\") " Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.003251 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1482d2e-b885-44bd-b679-109f0b9698ea-log-httpd\") pod \"a1482d2e-b885-44bd-b679-109f0b9698ea\" (UID: \"a1482d2e-b885-44bd-b679-109f0b9698ea\") " Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.003336 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1482d2e-b885-44bd-b679-109f0b9698ea-run-httpd\") pod \"a1482d2e-b885-44bd-b679-109f0b9698ea\" (UID: \"a1482d2e-b885-44bd-b679-109f0b9698ea\") " Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.003351 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1482d2e-b885-44bd-b679-109f0b9698ea-combined-ca-bundle\") pod \"a1482d2e-b885-44bd-b679-109f0b9698ea\" (UID: \"a1482d2e-b885-44bd-b679-109f0b9698ea\") " Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.003372 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/a1482d2e-b885-44bd-b679-109f0b9698ea-sg-core-conf-yaml\") pod \"a1482d2e-b885-44bd-b679-109f0b9698ea\" (UID: \"a1482d2e-b885-44bd-b679-109f0b9698ea\") " Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.003443 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4mxr\" (UniqueName: \"kubernetes.io/projected/a1482d2e-b885-44bd-b679-109f0b9698ea-kube-api-access-c4mxr\") pod \"a1482d2e-b885-44bd-b679-109f0b9698ea\" (UID: \"a1482d2e-b885-44bd-b679-109f0b9698ea\") " Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.005920 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1482d2e-b885-44bd-b679-109f0b9698ea-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a1482d2e-b885-44bd-b679-109f0b9698ea" (UID: "a1482d2e-b885-44bd-b679-109f0b9698ea"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.006679 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1482d2e-b885-44bd-b679-109f0b9698ea-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a1482d2e-b885-44bd-b679-109f0b9698ea" (UID: "a1482d2e-b885-44bd-b679-109f0b9698ea"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.074482 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1482d2e-b885-44bd-b679-109f0b9698ea-scripts" (OuterVolumeSpecName: "scripts") pod "a1482d2e-b885-44bd-b679-109f0b9698ea" (UID: "a1482d2e-b885-44bd-b679-109f0b9698ea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.092106 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1482d2e-b885-44bd-b679-109f0b9698ea-kube-api-access-c4mxr" (OuterVolumeSpecName: "kube-api-access-c4mxr") pod "a1482d2e-b885-44bd-b679-109f0b9698ea" (UID: "a1482d2e-b885-44bd-b679-109f0b9698ea"). InnerVolumeSpecName "kube-api-access-c4mxr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.112345 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw65p\" (UniqueName: \"kubernetes.io/projected/6f746721-5da3-4418-8ef6-d0b88f2121bc-kube-api-access-jw65p\") pod \"6f746721-5da3-4418-8ef6-d0b88f2121bc\" (UID: \"6f746721-5da3-4418-8ef6-d0b88f2121bc\") " Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.112450 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f746721-5da3-4418-8ef6-d0b88f2121bc-combined-ca-bundle\") pod \"6f746721-5da3-4418-8ef6-d0b88f2121bc\" (UID: \"6f746721-5da3-4418-8ef6-d0b88f2121bc\") " Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.112671 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f746721-5da3-4418-8ef6-d0b88f2121bc-ovndb-tls-certs\") pod \"6f746721-5da3-4418-8ef6-d0b88f2121bc\" (UID: \"6f746721-5da3-4418-8ef6-d0b88f2121bc\") " Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.112735 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6f746721-5da3-4418-8ef6-d0b88f2121bc-httpd-config\") pod \"6f746721-5da3-4418-8ef6-d0b88f2121bc\" (UID: \"6f746721-5da3-4418-8ef6-d0b88f2121bc\") " Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.112776 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6f746721-5da3-4418-8ef6-d0b88f2121bc-config\") pod \"6f746721-5da3-4418-8ef6-d0b88f2121bc\" (UID: \"6f746721-5da3-4418-8ef6-d0b88f2121bc\") " Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.113359 4721 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1482d2e-b885-44bd-b679-109f0b9698ea-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.113371 4721 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1482d2e-b885-44bd-b679-109f0b9698ea-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.113381 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4mxr\" (UniqueName: \"kubernetes.io/projected/a1482d2e-b885-44bd-b679-109f0b9698ea-kube-api-access-c4mxr\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.113391 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1482d2e-b885-44bd-b679-109f0b9698ea-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.128373 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f746721-5da3-4418-8ef6-d0b88f2121bc-kube-api-access-jw65p" (OuterVolumeSpecName: "kube-api-access-jw65p") pod "6f746721-5da3-4418-8ef6-d0b88f2121bc" (UID: "6f746721-5da3-4418-8ef6-d0b88f2121bc"). InnerVolumeSpecName "kube-api-access-jw65p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.129286 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f746721-5da3-4418-8ef6-d0b88f2121bc-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "6f746721-5da3-4418-8ef6-d0b88f2121bc" (UID: "6f746721-5da3-4418-8ef6-d0b88f2121bc"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.206618 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8588ddc4dc-rq722" event={"ID":"ee4f36c4-39c4-4cb4-b24c-676b76966752","Type":"ContainerStarted","Data":"22a62ef40c5e00afd66630e4761effc79fe1d27fa65d0590da25c3c1d43ad9bf"} Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.220883 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw65p\" (UniqueName: \"kubernetes.io/projected/6f746721-5da3-4418-8ef6-d0b88f2121bc-kube-api-access-jw65p\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.220924 4721 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6f746721-5da3-4418-8ef6-d0b88f2121bc-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.269035 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f746721-5da3-4418-8ef6-d0b88f2121bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f746721-5da3-4418-8ef6-d0b88f2121bc" (UID: "6f746721-5da3-4418-8ef6-d0b88f2121bc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.270965 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7556fd87fb-z78lc" event={"ID":"6f746721-5da3-4418-8ef6-d0b88f2121bc","Type":"ContainerDied","Data":"8cc4a5e49bcdc1259392f527ba7a63bedab94aac24105814e1cdaa17c7280e6e"} Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.271296 4721 scope.go:117] "RemoveContainer" containerID="8f88f40e370779b33eeb9ef5e263779063066216a67b07cb99c3e5b293638a3d" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.271495 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7556fd87fb-z78lc" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.284361 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f746721-5da3-4418-8ef6-d0b88f2121bc-config" (OuterVolumeSpecName: "config") pod "6f746721-5da3-4418-8ef6-d0b88f2121bc" (UID: "6f746721-5da3-4418-8ef6-d0b88f2121bc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.303394 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1482d2e-b885-44bd-b679-109f0b9698ea","Type":"ContainerDied","Data":"c455b716179204c8c0342b961a933847d0497bb7d80b63f7c4dd9a07665bceb2"} Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.303537 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.314520 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1482d2e-b885-44bd-b679-109f0b9698ea-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a1482d2e-b885-44bd-b679-109f0b9698ea" (UID: "a1482d2e-b885-44bd-b679-109f0b9698ea"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.331298 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6f746721-5da3-4418-8ef6-d0b88f2121bc-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.331337 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f746721-5da3-4418-8ef6-d0b88f2121bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.331355 4721 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1482d2e-b885-44bd-b679-109f0b9698ea-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.355294 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f746721-5da3-4418-8ef6-d0b88f2121bc-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "6f746721-5da3-4418-8ef6-d0b88f2121bc" (UID: "6f746721-5da3-4418-8ef6-d0b88f2121bc"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.409464 4721 scope.go:117] "RemoveContainer" containerID="4badbbeb19bd1ce4dabd3dbcb1949a66c97f449e7bcb0e940791d3782f7337b1" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.437039 4721 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f746721-5da3-4418-8ef6-d0b88f2121bc-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.519647 4721 scope.go:117] "RemoveContainer" containerID="f6ced5c029fbb0ced62ee14010b08872173b1b7d00f61d4cb14f85035de775b1" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.636753 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1482d2e-b885-44bd-b679-109f0b9698ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1482d2e-b885-44bd-b679-109f0b9698ea" (UID: "a1482d2e-b885-44bd-b679-109f0b9698ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.649784 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1482d2e-b885-44bd-b679-109f0b9698ea-config-data" (OuterVolumeSpecName: "config-data") pod "a1482d2e-b885-44bd-b679-109f0b9698ea" (UID: "a1482d2e-b885-44bd-b679-109f0b9698ea"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.679582 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1482d2e-b885-44bd-b679-109f0b9698ea-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.679831 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1482d2e-b885-44bd-b679-109f0b9698ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.724847 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6c76b54b86-n9kln"] Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.790283 4721 scope.go:117] "RemoveContainer" containerID="4d53de923a6215191fa0ab435e823d19598bea6ec82ecee59ac252608725c58c" Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.798839 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7556fd87fb-z78lc"] Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.816205 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7556fd87fb-z78lc"] Feb 02 13:25:04 crc kubenswrapper[4721]: I0202 13:25:04.837190 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-69599c8b5f-rjs76"] Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:04.999049 4721 scope.go:117] "RemoveContainer" containerID="0b3cb07d111bcefe4cce6e8cdec0586b864539201a7ae3eef461e7791d1c123a" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.008380 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.031229 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.059436 4721 scope.go:117] "RemoveContainer" containerID="e7a1435ecc8600326126b2f72cf42852a2ff26324fb369f144ca54a8e7a02fe8" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.063104 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:25:05 crc kubenswrapper[4721]: E0202 13:25:05.064016 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1482d2e-b885-44bd-b679-109f0b9698ea" containerName="ceilometer-notification-agent" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.064035 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1482d2e-b885-44bd-b679-109f0b9698ea" containerName="ceilometer-notification-agent" Feb 02 13:25:05 crc kubenswrapper[4721]: E0202 13:25:05.064050 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1482d2e-b885-44bd-b679-109f0b9698ea" containerName="sg-core" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.064055 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1482d2e-b885-44bd-b679-109f0b9698ea" containerName="sg-core" Feb 02 13:25:05 crc kubenswrapper[4721]: E0202 13:25:05.064081 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1482d2e-b885-44bd-b679-109f0b9698ea" containerName="ceilometer-central-agent" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.064087 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1482d2e-b885-44bd-b679-109f0b9698ea" containerName="ceilometer-central-agent" Feb 02 13:25:05 crc kubenswrapper[4721]: E0202 13:25:05.064116 4721 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="6f746721-5da3-4418-8ef6-d0b88f2121bc" containerName="neutron-api" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.064122 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f746721-5da3-4418-8ef6-d0b88f2121bc" containerName="neutron-api" Feb 02 13:25:05 crc kubenswrapper[4721]: E0202 13:25:05.064141 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1482d2e-b885-44bd-b679-109f0b9698ea" containerName="proxy-httpd" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.064148 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1482d2e-b885-44bd-b679-109f0b9698ea" containerName="proxy-httpd" Feb 02 13:25:05 crc kubenswrapper[4721]: E0202 13:25:05.064161 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f746721-5da3-4418-8ef6-d0b88f2121bc" containerName="neutron-httpd" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.064167 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f746721-5da3-4418-8ef6-d0b88f2121bc" containerName="neutron-httpd" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.064383 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1482d2e-b885-44bd-b679-109f0b9698ea" containerName="ceilometer-central-agent" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.064401 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1482d2e-b885-44bd-b679-109f0b9698ea" containerName="proxy-httpd" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.064411 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f746721-5da3-4418-8ef6-d0b88f2121bc" containerName="neutron-httpd" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.064421 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1482d2e-b885-44bd-b679-109f0b9698ea" containerName="ceilometer-notification-agent" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.064439 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f746721-5da3-4418-8ef6-d0b88f2121bc" containerName="neutron-api" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.064450 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1482d2e-b885-44bd-b679-109f0b9698ea" containerName="sg-core" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.067022 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.070844 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.071116 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.078253 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.206962 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/679713b8-7e9b-4ccc-87f3-85afd17dc008-scripts\") pod \"ceilometer-0\" (UID: \"679713b8-7e9b-4ccc-87f3-85afd17dc008\") " pod="openstack/ceilometer-0" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.207133 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/679713b8-7e9b-4ccc-87f3-85afd17dc008-run-httpd\") pod \"ceilometer-0\" (UID: \"679713b8-7e9b-4ccc-87f3-85afd17dc008\") " pod="openstack/ceilometer-0" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.207322 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/679713b8-7e9b-4ccc-87f3-85afd17dc008-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"679713b8-7e9b-4ccc-87f3-85afd17dc008\") " pod="openstack/ceilometer-0" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.207357 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7n46\" (UniqueName: \"kubernetes.io/projected/679713b8-7e9b-4ccc-87f3-85afd17dc008-kube-api-access-r7n46\") pod \"ceilometer-0\" (UID: \"679713b8-7e9b-4ccc-87f3-85afd17dc008\") " pod="openstack/ceilometer-0" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.207484 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679713b8-7e9b-4ccc-87f3-85afd17dc008-config-data\") pod \"ceilometer-0\" (UID: \"679713b8-7e9b-4ccc-87f3-85afd17dc008\") " pod="openstack/ceilometer-0" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.207807 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/679713b8-7e9b-4ccc-87f3-85afd17dc008-log-httpd\") pod \"ceilometer-0\" (UID: \"679713b8-7e9b-4ccc-87f3-85afd17dc008\") " pod="openstack/ceilometer-0" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.207890 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679713b8-7e9b-4ccc-87f3-85afd17dc008-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"679713b8-7e9b-4ccc-87f3-85afd17dc008\") " pod="openstack/ceilometer-0" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.266584 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mlbxn" podUID="37372b76-ef54-4a44-9b56-dea754373219" containerName="registry-server" probeResult="failure" output=< Feb 02 13:25:05 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Feb 02 13:25:05 crc kubenswrapper[4721]: 
> Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.310244 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/679713b8-7e9b-4ccc-87f3-85afd17dc008-run-httpd\") pod \"ceilometer-0\" (UID: \"679713b8-7e9b-4ccc-87f3-85afd17dc008\") " pod="openstack/ceilometer-0" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.310362 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/679713b8-7e9b-4ccc-87f3-85afd17dc008-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"679713b8-7e9b-4ccc-87f3-85afd17dc008\") " pod="openstack/ceilometer-0" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.310519 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7n46\" (UniqueName: \"kubernetes.io/projected/679713b8-7e9b-4ccc-87f3-85afd17dc008-kube-api-access-r7n46\") pod \"ceilometer-0\" (UID: \"679713b8-7e9b-4ccc-87f3-85afd17dc008\") " pod="openstack/ceilometer-0" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.310627 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679713b8-7e9b-4ccc-87f3-85afd17dc008-config-data\") pod \"ceilometer-0\" (UID: \"679713b8-7e9b-4ccc-87f3-85afd17dc008\") " pod="openstack/ceilometer-0" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.310776 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/679713b8-7e9b-4ccc-87f3-85afd17dc008-log-httpd\") pod \"ceilometer-0\" (UID: \"679713b8-7e9b-4ccc-87f3-85afd17dc008\") " pod="openstack/ceilometer-0" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.310869 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679713b8-7e9b-4ccc-87f3-85afd17dc008-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"679713b8-7e9b-4ccc-87f3-85afd17dc008\") " pod="openstack/ceilometer-0" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.311013 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/679713b8-7e9b-4ccc-87f3-85afd17dc008-scripts\") pod \"ceilometer-0\" (UID: \"679713b8-7e9b-4ccc-87f3-85afd17dc008\") " pod="openstack/ceilometer-0" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.311785 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/679713b8-7e9b-4ccc-87f3-85afd17dc008-run-httpd\") pod \"ceilometer-0\" (UID: \"679713b8-7e9b-4ccc-87f3-85afd17dc008\") " pod="openstack/ceilometer-0" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.312491 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/679713b8-7e9b-4ccc-87f3-85afd17dc008-log-httpd\") pod \"ceilometer-0\" (UID: \"679713b8-7e9b-4ccc-87f3-85afd17dc008\") " pod="openstack/ceilometer-0" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.315504 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/679713b8-7e9b-4ccc-87f3-85afd17dc008-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"679713b8-7e9b-4ccc-87f3-85afd17dc008\") " pod="openstack/ceilometer-0" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 
13:25:05.322706 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679713b8-7e9b-4ccc-87f3-85afd17dc008-config-data\") pod \"ceilometer-0\" (UID: \"679713b8-7e9b-4ccc-87f3-85afd17dc008\") " pod="openstack/ceilometer-0" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.327280 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/679713b8-7e9b-4ccc-87f3-85afd17dc008-scripts\") pod \"ceilometer-0\" (UID: \"679713b8-7e9b-4ccc-87f3-85afd17dc008\") " pod="openstack/ceilometer-0" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.344658 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679713b8-7e9b-4ccc-87f3-85afd17dc008-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"679713b8-7e9b-4ccc-87f3-85afd17dc008\") " pod="openstack/ceilometer-0" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.357589 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7n46\" (UniqueName: \"kubernetes.io/projected/679713b8-7e9b-4ccc-87f3-85afd17dc008-kube-api-access-r7n46\") pod \"ceilometer-0\" (UID: \"679713b8-7e9b-4ccc-87f3-85afd17dc008\") " pod="openstack/ceilometer-0" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.372234 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6c76b54b86-n9kln" event={"ID":"c5f7cb67-4d7c-4bc8-bf45-c949450206f0","Type":"ContainerStarted","Data":"622fe9b6176f587f10b3c6f779b6fa54763b7ccedb8408baf73c74843450fabf"} Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.395668 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-69599c8b5f-rjs76" event={"ID":"ce072a84-75da-4060-9c4a-d029b3a14947","Type":"ContainerStarted","Data":"824ac82da4e9fbe898c73f52200c686a08effd8f70bdcb2e1cbad03807a59062"} Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.395711 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-69599c8b5f-rjs76" event={"ID":"ce072a84-75da-4060-9c4a-d029b3a14947","Type":"ContainerStarted","Data":"c5aaadf4a39fbac2caf4ce2bae03ec472daa951b771c303d721f863e7147d5f2"} Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.396319 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-69599c8b5f-rjs76" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.417662 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.429776 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"32729b18-a175-4abd-a8cf-392d318b64d8","Type":"ContainerStarted","Data":"3f17b6addea34db921e5e823921c2614b964aafd11d79b52c8d69de9bcab3c0d"} Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.489301 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-m87tn"] Feb 02 13:25:05 crc kubenswrapper[4721]: W0202 13:25:05.541679 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbddd12fa_0653_4199_867f_bfdf51350b39.slice/crio-0f9d2ecaa1c8c841d9801b80ea33e35c8c2a2c815bca71770000a6385ed7be15 WatchSource:0}: Error finding container 0f9d2ecaa1c8c841d9801b80ea33e35c8c2a2c815bca71770000a6385ed7be15: Status 404 returned error can't find the container with id 0f9d2ecaa1c8c841d9801b80ea33e35c8c2a2c815bca71770000a6385ed7be15 Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.588395 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-869db4994-hgxnh"] Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.606439 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-86974d69bd-t6gcz"] Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.612384 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-69599c8b5f-rjs76" podStartSLOduration=12.612366269 podStartE2EDuration="12.612366269s" podCreationTimestamp="2026-02-02 13:24:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:25:05.442586765 +0000 UTC m=+1445.745101154" watchObservedRunningTime="2026-02-02 13:25:05.612366269 +0000 UTC m=+1445.914880658" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.639972 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-59bff7fb85-wq6q5"] Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.650272 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.191018503 podStartE2EDuration="24.650246847s" podCreationTimestamp="2026-02-02 13:24:41 +0000 UTC" firstStartedPulling="2026-02-02 13:24:42.355710807 +0000 UTC m=+1422.658225186" lastFinishedPulling="2026-02-02 13:25:03.814939141 +0000 UTC m=+1444.117453530" observedRunningTime="2026-02-02 13:25:05.483979217 +0000 UTC m=+1445.786493626" watchObservedRunningTime="2026-02-02 13:25:05.650246847 +0000 UTC m=+1445.952761246" Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.665697 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-596786fd64-rpzql"] Feb 02 13:25:05 crc kubenswrapper[4721]: I0202 13:25:05.709770 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-64c55c4cc7-4htzp"] Feb 02 13:25:05 crc kubenswrapper[4721]: W0202 13:25:05.792749 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c23b064_e24b_4ab3_886d_d731004b7479.slice/crio-9ad18306781c46f28b10283fd0aff6494f3502186f08023b078948b6cce763c5 WatchSource:0}: Error finding container 9ad18306781c46f28b10283fd0aff6494f3502186f08023b078948b6cce763c5: Status 404 returned error can't find the container with id 
9ad18306781c46f28b10283fd0aff6494f3502186f08023b078948b6cce763c5 Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.035212 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.142512 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6048763-9be8-4530-b02a-78022c20d668-scripts\") pod \"a6048763-9be8-4530-b02a-78022c20d668\" (UID: \"a6048763-9be8-4530-b02a-78022c20d668\") " Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.142544 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6048763-9be8-4530-b02a-78022c20d668-combined-ca-bundle\") pod \"a6048763-9be8-4530-b02a-78022c20d668\" (UID: \"a6048763-9be8-4530-b02a-78022c20d668\") " Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.142576 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a6048763-9be8-4530-b02a-78022c20d668-etc-machine-id\") pod \"a6048763-9be8-4530-b02a-78022c20d668\" (UID: \"a6048763-9be8-4530-b02a-78022c20d668\") " Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.142616 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6048763-9be8-4530-b02a-78022c20d668-config-data\") pod \"a6048763-9be8-4530-b02a-78022c20d668\" (UID: \"a6048763-9be8-4530-b02a-78022c20d668\") " Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.142730 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6048763-9be8-4530-b02a-78022c20d668-config-data-custom\") pod \"a6048763-9be8-4530-b02a-78022c20d668\" (UID: \"a6048763-9be8-4530-b02a-78022c20d668\") " Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.142841 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8k67\" (UniqueName: \"kubernetes.io/projected/a6048763-9be8-4530-b02a-78022c20d668-kube-api-access-m8k67\") pod \"a6048763-9be8-4530-b02a-78022c20d668\" (UID: \"a6048763-9be8-4530-b02a-78022c20d668\") " Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.142986 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6048763-9be8-4530-b02a-78022c20d668-logs\") pod \"a6048763-9be8-4530-b02a-78022c20d668\" (UID: \"a6048763-9be8-4530-b02a-78022c20d668\") " Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.145093 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6048763-9be8-4530-b02a-78022c20d668-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a6048763-9be8-4530-b02a-78022c20d668" (UID: "a6048763-9be8-4530-b02a-78022c20d668"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.146058 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6048763-9be8-4530-b02a-78022c20d668-logs" (OuterVolumeSpecName: "logs") pod "a6048763-9be8-4530-b02a-78022c20d668" (UID: "a6048763-9be8-4530-b02a-78022c20d668"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.188141 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6048763-9be8-4530-b02a-78022c20d668-kube-api-access-m8k67" (OuterVolumeSpecName: "kube-api-access-m8k67") pod "a6048763-9be8-4530-b02a-78022c20d668" (UID: "a6048763-9be8-4530-b02a-78022c20d668"). InnerVolumeSpecName "kube-api-access-m8k67". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.197206 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6048763-9be8-4530-b02a-78022c20d668-scripts" (OuterVolumeSpecName: "scripts") pod "a6048763-9be8-4530-b02a-78022c20d668" (UID: "a6048763-9be8-4530-b02a-78022c20d668"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.234394 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6048763-9be8-4530-b02a-78022c20d668-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a6048763-9be8-4530-b02a-78022c20d668" (UID: "a6048763-9be8-4530-b02a-78022c20d668"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.259248 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8k67\" (UniqueName: \"kubernetes.io/projected/a6048763-9be8-4530-b02a-78022c20d668-kube-api-access-m8k67\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.259310 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6048763-9be8-4530-b02a-78022c20d668-logs\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.259325 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6048763-9be8-4530-b02a-78022c20d668-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.259350 4721 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a6048763-9be8-4530-b02a-78022c20d668-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.259375 4721 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6048763-9be8-4530-b02a-78022c20d668-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.332940 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.445686 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6048763-9be8-4530-b02a-78022c20d668-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6048763-9be8-4530-b02a-78022c20d668" (UID: "a6048763-9be8-4530-b02a-78022c20d668"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.490523 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f746721-5da3-4418-8ef6-d0b88f2121bc" path="/var/lib/kubelet/pods/6f746721-5da3-4418-8ef6-d0b88f2121bc/volumes" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.492194 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1482d2e-b885-44bd-b679-109f0b9698ea" path="/var/lib/kubelet/pods/a1482d2e-b885-44bd-b679-109f0b9698ea/volumes" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.493791 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6048763-9be8-4530-b02a-78022c20d668-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.511506 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6048763-9be8-4530-b02a-78022c20d668-config-data" (OuterVolumeSpecName: "config-data") pod "a6048763-9be8-4530-b02a-78022c20d668" (UID: "a6048763-9be8-4530-b02a-78022c20d668"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.521359 4721 generic.go:334] "Generic (PLEG): container finished" podID="a6048763-9be8-4530-b02a-78022c20d668" containerID="014d914a5ce67334b42e317ebe6baa31bca4aa273d11d84a4123f675577064a0" exitCode=137 Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.521755 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.596830 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6048763-9be8-4530-b02a-78022c20d668-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.611179 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-596786fd64-rpzql" podStartSLOduration=6.611109816 podStartE2EDuration="6.611109816s" podCreationTimestamp="2026-02-02 13:25:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:25:06.549428243 +0000 UTC m=+1446.851942632" watchObservedRunningTime="2026-02-02 13:25:06.611109816 +0000 UTC m=+1446.913624235" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.687319 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-869db4994-hgxnh" event={"ID":"2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8","Type":"ContainerStarted","Data":"b49739d0b82310eb7a88b3289808bff2fac1bf21a0b025f6531793460d8748fd"} Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.687380 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-596786fd64-rpzql" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.687393 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-64c55c4cc7-4htzp" event={"ID":"5c23b064-e24b-4ab3-886d-d731004b7479","Type":"ContainerStarted","Data":"9ad18306781c46f28b10283fd0aff6494f3502186f08023b078948b6cce763c5"} Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.687413 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-m87tn" 
event={"ID":"a217ca40-3638-474b-b739-cb8784823fa6","Type":"ContainerStarted","Data":"855aab218dced6a9a20cd36dee1f3e920c647b6da54cc409503c35b4f9458f8e"} Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.687426 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-59bff7fb85-wq6q5" event={"ID":"bddd12fa-0653-4199-867f-bfdf51350b39","Type":"ContainerStarted","Data":"0f9d2ecaa1c8c841d9801b80ea33e35c8c2a2c815bca71770000a6385ed7be15"} Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.687438 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a6048763-9be8-4530-b02a-78022c20d668","Type":"ContainerDied","Data":"014d914a5ce67334b42e317ebe6baa31bca4aa273d11d84a4123f675577064a0"} Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.687454 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a6048763-9be8-4530-b02a-78022c20d668","Type":"ContainerDied","Data":"722b50e74b8986a87887949a03c96fc3de9ed0d41b61df9c48e69c902703c27b"} Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.687463 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-596786fd64-rpzql" event={"ID":"6616d7ae-f7c6-4fe2-bf68-a0668a84fb8c","Type":"ContainerStarted","Data":"aa5741d6d05ef4756b3d79fd25470a761119d83f3c7124c8d4e040ceaf9fb0b0"} Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.687471 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-596786fd64-rpzql" event={"ID":"6616d7ae-f7c6-4fe2-bf68-a0668a84fb8c","Type":"ContainerStarted","Data":"c28cbd79ea67fd089d525cb64965c78c717693cbb211141dd558ccdd5b91d1f8"} Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.687479 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-86974d69bd-t6gcz" event={"ID":"5d1412d5-76f7-4132-889d-f706432b3ecc","Type":"ContainerStarted","Data":"7896797b149f41786af0097adf6f640b7ee27ccb8c123187623a348336549aa9"} Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.687496 4721 scope.go:117] "RemoveContainer" containerID="014d914a5ce67334b42e317ebe6baa31bca4aa273d11d84a4123f675577064a0" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.740919 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.754056 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.772279 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 02 13:25:06 crc kubenswrapper[4721]: E0202 13:25:06.772857 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6048763-9be8-4530-b02a-78022c20d668" containerName="cinder-api" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.772882 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6048763-9be8-4530-b02a-78022c20d668" containerName="cinder-api" Feb 02 13:25:06 crc kubenswrapper[4721]: E0202 13:25:06.772913 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6048763-9be8-4530-b02a-78022c20d668" containerName="cinder-api-log" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.772924 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6048763-9be8-4530-b02a-78022c20d668" containerName="cinder-api-log" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.789917 4721 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a6048763-9be8-4530-b02a-78022c20d668" containerName="cinder-api-log" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.789980 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6048763-9be8-4530-b02a-78022c20d668" containerName="cinder-api" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.800639 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.800777 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.804337 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.804577 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.807501 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.909570 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djkzn\" (UniqueName: \"kubernetes.io/projected/0eabfa0b-0304-4eda-8f8a-dc9160569e4b-kube-api-access-djkzn\") pod \"cinder-api-0\" (UID: \"0eabfa0b-0304-4eda-8f8a-dc9160569e4b\") " pod="openstack/cinder-api-0" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.909643 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eabfa0b-0304-4eda-8f8a-dc9160569e4b-config-data\") pod \"cinder-api-0\" (UID: \"0eabfa0b-0304-4eda-8f8a-dc9160569e4b\") " pod="openstack/cinder-api-0" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.909700 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eabfa0b-0304-4eda-8f8a-dc9160569e4b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"0eabfa0b-0304-4eda-8f8a-dc9160569e4b\") " pod="openstack/cinder-api-0" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.909726 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0eabfa0b-0304-4eda-8f8a-dc9160569e4b-config-data-custom\") pod \"cinder-api-0\" (UID: \"0eabfa0b-0304-4eda-8f8a-dc9160569e4b\") " pod="openstack/cinder-api-0" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.909844 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0eabfa0b-0304-4eda-8f8a-dc9160569e4b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0eabfa0b-0304-4eda-8f8a-dc9160569e4b\") " pod="openstack/cinder-api-0" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.909892 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eabfa0b-0304-4eda-8f8a-dc9160569e4b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0eabfa0b-0304-4eda-8f8a-dc9160569e4b\") " pod="openstack/cinder-api-0" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.909966 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0eabfa0b-0304-4eda-8f8a-dc9160569e4b-logs\") pod \"cinder-api-0\" (UID: \"0eabfa0b-0304-4eda-8f8a-dc9160569e4b\") " pod="openstack/cinder-api-0" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.910272 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0eabfa0b-0304-4eda-8f8a-dc9160569e4b-scripts\") pod \"cinder-api-0\" (UID: \"0eabfa0b-0304-4eda-8f8a-dc9160569e4b\") " pod="openstack/cinder-api-0" Feb 02 13:25:06 crc kubenswrapper[4721]: I0202 13:25:06.910379 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eabfa0b-0304-4eda-8f8a-dc9160569e4b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"0eabfa0b-0304-4eda-8f8a-dc9160569e4b\") " pod="openstack/cinder-api-0" Feb 02 13:25:07 crc kubenswrapper[4721]: I0202 13:25:07.012848 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eabfa0b-0304-4eda-8f8a-dc9160569e4b-config-data\") pod \"cinder-api-0\" (UID: \"0eabfa0b-0304-4eda-8f8a-dc9160569e4b\") " pod="openstack/cinder-api-0" Feb 02 13:25:07 crc kubenswrapper[4721]: I0202 13:25:07.012913 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eabfa0b-0304-4eda-8f8a-dc9160569e4b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"0eabfa0b-0304-4eda-8f8a-dc9160569e4b\") " pod="openstack/cinder-api-0" Feb 02 13:25:07 crc kubenswrapper[4721]: I0202 13:25:07.012949 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0eabfa0b-0304-4eda-8f8a-dc9160569e4b-config-data-custom\") pod \"cinder-api-0\" (UID: \"0eabfa0b-0304-4eda-8f8a-dc9160569e4b\") " pod="openstack/cinder-api-0" Feb 02 13:25:07 crc kubenswrapper[4721]: I0202 13:25:07.012991 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0eabfa0b-0304-4eda-8f8a-dc9160569e4b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0eabfa0b-0304-4eda-8f8a-dc9160569e4b\") " pod="openstack/cinder-api-0" Feb 02 13:25:07 crc kubenswrapper[4721]: I0202 13:25:07.013019 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eabfa0b-0304-4eda-8f8a-dc9160569e4b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0eabfa0b-0304-4eda-8f8a-dc9160569e4b\") " pod="openstack/cinder-api-0" Feb 02 13:25:07 crc kubenswrapper[4721]: I0202 13:25:07.013079 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0eabfa0b-0304-4eda-8f8a-dc9160569e4b-logs\") pod \"cinder-api-0\" (UID: \"0eabfa0b-0304-4eda-8f8a-dc9160569e4b\") " pod="openstack/cinder-api-0" Feb 02 13:25:07 crc kubenswrapper[4721]: I0202 13:25:07.013150 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0eabfa0b-0304-4eda-8f8a-dc9160569e4b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0eabfa0b-0304-4eda-8f8a-dc9160569e4b\") " pod="openstack/cinder-api-0" Feb 02 13:25:07 crc kubenswrapper[4721]: I0202 13:25:07.014055 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0eabfa0b-0304-4eda-8f8a-dc9160569e4b-scripts\") pod \"cinder-api-0\" (UID: \"0eabfa0b-0304-4eda-8f8a-dc9160569e4b\") " pod="openstack/cinder-api-0" Feb 02 13:25:07 crc kubenswrapper[4721]: I0202 13:25:07.014166 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eabfa0b-0304-4eda-8f8a-dc9160569e4b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"0eabfa0b-0304-4eda-8f8a-dc9160569e4b\") " pod="openstack/cinder-api-0" Feb 02 13:25:07 crc kubenswrapper[4721]: I0202 13:25:07.014377 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djkzn\" (UniqueName: \"kubernetes.io/projected/0eabfa0b-0304-4eda-8f8a-dc9160569e4b-kube-api-access-djkzn\") pod \"cinder-api-0\" (UID: \"0eabfa0b-0304-4eda-8f8a-dc9160569e4b\") " pod="openstack/cinder-api-0" Feb 02 13:25:07 crc kubenswrapper[4721]: I0202 13:25:07.014575 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0eabfa0b-0304-4eda-8f8a-dc9160569e4b-logs\") pod \"cinder-api-0\" (UID: \"0eabfa0b-0304-4eda-8f8a-dc9160569e4b\") " pod="openstack/cinder-api-0" Feb 02 13:25:07 crc kubenswrapper[4721]: I0202 13:25:07.020898 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0eabfa0b-0304-4eda-8f8a-dc9160569e4b-scripts\") pod \"cinder-api-0\" (UID: \"0eabfa0b-0304-4eda-8f8a-dc9160569e4b\") " pod="openstack/cinder-api-0" Feb 02 13:25:07 crc kubenswrapper[4721]: I0202 13:25:07.021608 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eabfa0b-0304-4eda-8f8a-dc9160569e4b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0eabfa0b-0304-4eda-8f8a-dc9160569e4b\") " pod="openstack/cinder-api-0" Feb 02 13:25:07 crc kubenswrapper[4721]: I0202 13:25:07.025549 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eabfa0b-0304-4eda-8f8a-dc9160569e4b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"0eabfa0b-0304-4eda-8f8a-dc9160569e4b\") " pod="openstack/cinder-api-0" Feb 02 13:25:07 crc kubenswrapper[4721]: I0202 13:25:07.035853 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0eabfa0b-0304-4eda-8f8a-dc9160569e4b-config-data-custom\") pod \"cinder-api-0\" (UID: \"0eabfa0b-0304-4eda-8f8a-dc9160569e4b\") " pod="openstack/cinder-api-0" Feb 02 13:25:07 crc kubenswrapper[4721]: I0202 13:25:07.036954 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eabfa0b-0304-4eda-8f8a-dc9160569e4b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"0eabfa0b-0304-4eda-8f8a-dc9160569e4b\") " pod="openstack/cinder-api-0" Feb 02 13:25:07 crc kubenswrapper[4721]: I0202 13:25:07.059833 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djkzn\" (UniqueName: \"kubernetes.io/projected/0eabfa0b-0304-4eda-8f8a-dc9160569e4b-kube-api-access-djkzn\") pod \"cinder-api-0\" (UID: \"0eabfa0b-0304-4eda-8f8a-dc9160569e4b\") " pod="openstack/cinder-api-0" Feb 02 13:25:07 crc kubenswrapper[4721]: I0202 13:25:07.063057 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0eabfa0b-0304-4eda-8f8a-dc9160569e4b-config-data\") pod \"cinder-api-0\" (UID: \"0eabfa0b-0304-4eda-8f8a-dc9160569e4b\") " pod="openstack/cinder-api-0" Feb 02 13:25:07 crc kubenswrapper[4721]: I0202 13:25:07.135945 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 13:25:07 crc kubenswrapper[4721]: I0202 13:25:07.552866 4721 generic.go:334] "Generic (PLEG): container finished" podID="a217ca40-3638-474b-b739-cb8784823fa6" containerID="bfd05ac10c25cef470159320ac73442924479047b666e7ec3809106d9b26e0ed" exitCode=0 Feb 02 13:25:07 crc kubenswrapper[4721]: I0202 13:25:07.554709 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-m87tn" event={"ID":"a217ca40-3638-474b-b739-cb8784823fa6","Type":"ContainerDied","Data":"bfd05ac10c25cef470159320ac73442924479047b666e7ec3809106d9b26e0ed"} Feb 02 13:25:07 crc kubenswrapper[4721]: W0202 13:25:07.815957 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod679713b8_7e9b_4ccc_87f3_85afd17dc008.slice/crio-6f08f3fdbee7bda0664d6d60e3f3f3c62d78c95caee038e2d39526bac0095990 WatchSource:0}: Error finding container 6f08f3fdbee7bda0664d6d60e3f3f3c62d78c95caee038e2d39526bac0095990: Status 404 returned error can't find the container with id 6f08f3fdbee7bda0664d6d60e3f3f3c62d78c95caee038e2d39526bac0095990 Feb 02 13:25:08 crc kubenswrapper[4721]: I0202 13:25:08.427310 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6048763-9be8-4530-b02a-78022c20d668" path="/var/lib/kubelet/pods/a6048763-9be8-4530-b02a-78022c20d668/volumes" Feb 02 13:25:08 crc kubenswrapper[4721]: I0202 13:25:08.497958 4721 scope.go:117] "RemoveContainer" containerID="e2b0754f177aa2fb457d07e7ccb63cf3d2db89f9bc5c8b27f274b7d5e64b1681" Feb 02 13:25:08 crc kubenswrapper[4721]: I0202 13:25:08.592288 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"679713b8-7e9b-4ccc-87f3-85afd17dc008","Type":"ContainerStarted","Data":"6f08f3fdbee7bda0664d6d60e3f3f3c62d78c95caee038e2d39526bac0095990"} Feb 02 13:25:08 crc kubenswrapper[4721]: I0202 13:25:08.615890 4721 scope.go:117] "RemoveContainer" containerID="014d914a5ce67334b42e317ebe6baa31bca4aa273d11d84a4123f675577064a0" Feb 02 13:25:08 crc kubenswrapper[4721]: E0202 13:25:08.617229 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"014d914a5ce67334b42e317ebe6baa31bca4aa273d11d84a4123f675577064a0\": container with ID starting with 014d914a5ce67334b42e317ebe6baa31bca4aa273d11d84a4123f675577064a0 not found: ID does not exist" containerID="014d914a5ce67334b42e317ebe6baa31bca4aa273d11d84a4123f675577064a0" Feb 02 13:25:08 crc kubenswrapper[4721]: I0202 13:25:08.617275 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"014d914a5ce67334b42e317ebe6baa31bca4aa273d11d84a4123f675577064a0"} err="failed to get container status \"014d914a5ce67334b42e317ebe6baa31bca4aa273d11d84a4123f675577064a0\": rpc error: code = NotFound desc = could not find container \"014d914a5ce67334b42e317ebe6baa31bca4aa273d11d84a4123f675577064a0\": container with ID starting with 014d914a5ce67334b42e317ebe6baa31bca4aa273d11d84a4123f675577064a0 not found: ID does not exist" Feb 02 13:25:08 crc kubenswrapper[4721]: I0202 13:25:08.617303 4721 scope.go:117] "RemoveContainer" 
containerID="e2b0754f177aa2fb457d07e7ccb63cf3d2db89f9bc5c8b27f274b7d5e64b1681" Feb 02 13:25:08 crc kubenswrapper[4721]: E0202 13:25:08.617631 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2b0754f177aa2fb457d07e7ccb63cf3d2db89f9bc5c8b27f274b7d5e64b1681\": container with ID starting with e2b0754f177aa2fb457d07e7ccb63cf3d2db89f9bc5c8b27f274b7d5e64b1681 not found: ID does not exist" containerID="e2b0754f177aa2fb457d07e7ccb63cf3d2db89f9bc5c8b27f274b7d5e64b1681" Feb 02 13:25:08 crc kubenswrapper[4721]: I0202 13:25:08.617655 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2b0754f177aa2fb457d07e7ccb63cf3d2db89f9bc5c8b27f274b7d5e64b1681"} err="failed to get container status \"e2b0754f177aa2fb457d07e7ccb63cf3d2db89f9bc5c8b27f274b7d5e64b1681\": rpc error: code = NotFound desc = could not find container \"e2b0754f177aa2fb457d07e7ccb63cf3d2db89f9bc5c8b27f274b7d5e64b1681\": container with ID starting with e2b0754f177aa2fb457d07e7ccb63cf3d2db89f9bc5c8b27f274b7d5e64b1681 not found: ID does not exist" Feb 02 13:25:09 crc kubenswrapper[4721]: I0202 13:25:09.133406 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 13:25:09 crc kubenswrapper[4721]: I0202 13:25:09.616505 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-64c55c4cc7-4htzp" event={"ID":"5c23b064-e24b-4ab3-886d-d731004b7479","Type":"ContainerStarted","Data":"53fda4e0de936c98aab52cbdc7d3f337c88ae4716d61fd71055f35f57f0a8272"} Feb 02 13:25:09 crc kubenswrapper[4721]: I0202 13:25:09.616874 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-64c55c4cc7-4htzp" Feb 02 13:25:09 crc kubenswrapper[4721]: I0202 13:25:09.621868 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-m87tn" event={"ID":"a217ca40-3638-474b-b739-cb8784823fa6","Type":"ContainerStarted","Data":"02eb0b572a9d7db26d91ed79b8f1f7e0e781ae5e0ac90b394b51265b1c190088"} Feb 02 13:25:09 crc kubenswrapper[4721]: I0202 13:25:09.622779 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-m87tn" Feb 02 13:25:09 crc kubenswrapper[4721]: I0202 13:25:09.626404 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-59bff7fb85-wq6q5" event={"ID":"bddd12fa-0653-4199-867f-bfdf51350b39","Type":"ContainerStarted","Data":"c89c6bf878c74f41052b7958471a080d9fee687b5e9464975790c1897da44a75"} Feb 02 13:25:09 crc kubenswrapper[4721]: I0202 13:25:09.626539 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-59bff7fb85-wq6q5" podUID="bddd12fa-0653-4199-867f-bfdf51350b39" containerName="heat-cfnapi" containerID="cri-o://c89c6bf878c74f41052b7958471a080d9fee687b5e9464975790c1897da44a75" gracePeriod=60 Feb 02 13:25:09 crc kubenswrapper[4721]: I0202 13:25:09.626553 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-59bff7fb85-wq6q5" Feb 02 13:25:09 crc kubenswrapper[4721]: I0202 13:25:09.652049 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-64c55c4cc7-4htzp" podStartSLOduration=4.74915499 podStartE2EDuration="7.652034129s" podCreationTimestamp="2026-02-02 13:25:02 +0000 UTC" firstStartedPulling="2026-02-02 13:25:05.801026146 +0000 UTC m=+1446.103540535" lastFinishedPulling="2026-02-02 13:25:08.703905285 
+0000 UTC m=+1449.006419674" observedRunningTime="2026-02-02 13:25:09.64990136 +0000 UTC m=+1449.952415769" watchObservedRunningTime="2026-02-02 13:25:09.652034129 +0000 UTC m=+1449.954548518" Feb 02 13:25:09 crc kubenswrapper[4721]: I0202 13:25:09.652162 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-869db4994-hgxnh" Feb 02 13:25:09 crc kubenswrapper[4721]: I0202 13:25:09.690472 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6c76b54b86-n9kln" event={"ID":"c5f7cb67-4d7c-4bc8-bf45-c949450206f0","Type":"ContainerStarted","Data":"cacfe9dad3c5ac1611cb10a831ba529e6821a0590168b211ab46225955ae992f"} Feb 02 13:25:09 crc kubenswrapper[4721]: I0202 13:25:09.692140 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6c76b54b86-n9kln" Feb 02 13:25:09 crc kubenswrapper[4721]: I0202 13:25:09.711154 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-59bff7fb85-wq6q5" podStartSLOduration=13.614440355 podStartE2EDuration="16.711131631s" podCreationTimestamp="2026-02-02 13:24:53 +0000 UTC" firstStartedPulling="2026-02-02 13:25:05.597898456 +0000 UTC m=+1445.900412845" lastFinishedPulling="2026-02-02 13:25:08.694589732 +0000 UTC m=+1448.997104121" observedRunningTime="2026-02-02 13:25:09.69118353 +0000 UTC m=+1449.993697919" watchObservedRunningTime="2026-02-02 13:25:09.711131631 +0000 UTC m=+1450.013646040" Feb 02 13:25:09 crc kubenswrapper[4721]: I0202 13:25:09.723081 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0eabfa0b-0304-4eda-8f8a-dc9160569e4b","Type":"ContainerStarted","Data":"6d51b976af288be98113ba87064c5de10cd72146e96a3af7a510329a6991fa32"} Feb 02 13:25:09 crc kubenswrapper[4721]: I0202 13:25:09.778604 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-m87tn" podStartSLOduration=16.77858249 podStartE2EDuration="16.77858249s" podCreationTimestamp="2026-02-02 13:24:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:25:09.723388174 +0000 UTC m=+1450.025902563" watchObservedRunningTime="2026-02-02 13:25:09.77858249 +0000 UTC m=+1450.081096889" Feb 02 13:25:09 crc kubenswrapper[4721]: I0202 13:25:09.842855 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6c76b54b86-n9kln" podStartSLOduration=5.986481586 podStartE2EDuration="9.842831663s" podCreationTimestamp="2026-02-02 13:25:00 +0000 UTC" firstStartedPulling="2026-02-02 13:25:04.79051428 +0000 UTC m=+1445.093028669" lastFinishedPulling="2026-02-02 13:25:08.646864357 +0000 UTC m=+1448.949378746" observedRunningTime="2026-02-02 13:25:09.766100192 +0000 UTC m=+1450.068614581" watchObservedRunningTime="2026-02-02 13:25:09.842831663 +0000 UTC m=+1450.145346052" Feb 02 13:25:09 crc kubenswrapper[4721]: I0202 13:25:09.846105 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-869db4994-hgxnh" podStartSLOduration=6.657639128 podStartE2EDuration="9.846094092s" podCreationTimestamp="2026-02-02 13:25:00 +0000 UTC" firstStartedPulling="2026-02-02 13:25:05.513522838 +0000 UTC m=+1445.816037227" lastFinishedPulling="2026-02-02 13:25:08.701977802 +0000 UTC m=+1449.004492191" observedRunningTime="2026-02-02 13:25:09.794513313 +0000 UTC m=+1450.097027692" watchObservedRunningTime="2026-02-02 
13:25:09.846094092 +0000 UTC m=+1450.148608501" Feb 02 13:25:10 crc kubenswrapper[4721]: I0202 13:25:10.739649 4721 generic.go:334] "Generic (PLEG): container finished" podID="2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8" containerID="d6a7a5707b26d69fb1863e451668fcdd388419479b4005bb1d55054ada2fb366" exitCode=1 Feb 02 13:25:10 crc kubenswrapper[4721]: I0202 13:25:10.739704 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-869db4994-hgxnh" event={"ID":"2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8","Type":"ContainerDied","Data":"d6a7a5707b26d69fb1863e451668fcdd388419479b4005bb1d55054ada2fb366"} Feb 02 13:25:10 crc kubenswrapper[4721]: I0202 13:25:10.740779 4721 scope.go:117] "RemoveContainer" containerID="d6a7a5707b26d69fb1863e451668fcdd388419479b4005bb1d55054ada2fb366" Feb 02 13:25:10 crc kubenswrapper[4721]: I0202 13:25:10.745704 4721 generic.go:334] "Generic (PLEG): container finished" podID="c5f7cb67-4d7c-4bc8-bf45-c949450206f0" containerID="cacfe9dad3c5ac1611cb10a831ba529e6821a0590168b211ab46225955ae992f" exitCode=1 Feb 02 13:25:10 crc kubenswrapper[4721]: I0202 13:25:10.745779 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6c76b54b86-n9kln" event={"ID":"c5f7cb67-4d7c-4bc8-bf45-c949450206f0","Type":"ContainerDied","Data":"cacfe9dad3c5ac1611cb10a831ba529e6821a0590168b211ab46225955ae992f"} Feb 02 13:25:10 crc kubenswrapper[4721]: I0202 13:25:10.746672 4721 scope.go:117] "RemoveContainer" containerID="cacfe9dad3c5ac1611cb10a831ba529e6821a0590168b211ab46225955ae992f" Feb 02 13:25:10 crc kubenswrapper[4721]: I0202 13:25:10.756623 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8588ddc4dc-rq722" event={"ID":"ee4f36c4-39c4-4cb4-b24c-676b76966752","Type":"ContainerStarted","Data":"fa800e1436f00d168fdcf5bb1372a6b44ff3f3a66add3f54482b3057f2c14bd2"} Feb 02 13:25:10 crc kubenswrapper[4721]: I0202 13:25:10.756803 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-8588ddc4dc-rq722" podUID="ee4f36c4-39c4-4cb4-b24c-676b76966752" containerName="heat-api" containerID="cri-o://fa800e1436f00d168fdcf5bb1372a6b44ff3f3a66add3f54482b3057f2c14bd2" gracePeriod=60 Feb 02 13:25:10 crc kubenswrapper[4721]: I0202 13:25:10.757126 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-8588ddc4dc-rq722" Feb 02 13:25:10 crc kubenswrapper[4721]: I0202 13:25:10.790080 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"679713b8-7e9b-4ccc-87f3-85afd17dc008","Type":"ContainerStarted","Data":"22439dbfbc2d5b9a7aaa38b029bb7c102abafe93d1e9397f189c54e9d9a93d8a"} Feb 02 13:25:10 crc kubenswrapper[4721]: I0202 13:25:10.831162 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-86974d69bd-t6gcz" event={"ID":"5d1412d5-76f7-4132-889d-f706432b3ecc","Type":"ContainerStarted","Data":"525d4f77f841cd0b1218431404c5fc5cc758ca5f6ff2e06ab4757a3b9c618d22"} Feb 02 13:25:10 crc kubenswrapper[4721]: I0202 13:25:10.832128 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-86974d69bd-t6gcz" Feb 02 13:25:10 crc kubenswrapper[4721]: I0202 13:25:10.833381 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-8588ddc4dc-rq722" podStartSLOduration=12.869659607 podStartE2EDuration="17.833361098s" podCreationTimestamp="2026-02-02 13:24:53 +0000 UTC" firstStartedPulling="2026-02-02 13:25:03.675973972 +0000 UTC m=+1443.978488361" 
lastFinishedPulling="2026-02-02 13:25:08.639675463 +0000 UTC m=+1448.942189852" observedRunningTime="2026-02-02 13:25:10.814392213 +0000 UTC m=+1451.116906632" watchObservedRunningTime="2026-02-02 13:25:10.833361098 +0000 UTC m=+1451.135875487" Feb 02 13:25:10 crc kubenswrapper[4721]: I0202 13:25:10.869502 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-86974d69bd-t6gcz" podStartSLOduration=5.7635264809999995 podStartE2EDuration="8.869478667s" podCreationTimestamp="2026-02-02 13:25:02 +0000 UTC" firstStartedPulling="2026-02-02 13:25:05.542312589 +0000 UTC m=+1445.844826978" lastFinishedPulling="2026-02-02 13:25:08.648264775 +0000 UTC m=+1448.950779164" observedRunningTime="2026-02-02 13:25:10.857324868 +0000 UTC m=+1451.159839257" watchObservedRunningTime="2026-02-02 13:25:10.869478667 +0000 UTC m=+1451.171993046" Feb 02 13:25:10 crc kubenswrapper[4721]: I0202 13:25:10.927180 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-869db4994-hgxnh" Feb 02 13:25:11 crc kubenswrapper[4721]: I0202 13:25:11.008190 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-6c76b54b86-n9kln" Feb 02 13:25:11 crc kubenswrapper[4721]: I0202 13:25:11.800200 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8588ddc4dc-rq722" Feb 02 13:25:11 crc kubenswrapper[4721]: I0202 13:25:11.841926 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"679713b8-7e9b-4ccc-87f3-85afd17dc008","Type":"ContainerStarted","Data":"f207fef2f76230ae29ed92563bfe6192bf4e5751fb81be7d8bc1400d4d42e087"} Feb 02 13:25:11 crc kubenswrapper[4721]: I0202 13:25:11.845357 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-869db4994-hgxnh" event={"ID":"2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8","Type":"ContainerStarted","Data":"9892fb62f9251fa656b60a855b779d0ef03617e58fee62707134474322dbab8a"} Feb 02 13:25:11 crc kubenswrapper[4721]: I0202 13:25:11.845485 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-869db4994-hgxnh" Feb 02 13:25:11 crc kubenswrapper[4721]: I0202 13:25:11.849256 4721 generic.go:334] "Generic (PLEG): container finished" podID="c5f7cb67-4d7c-4bc8-bf45-c949450206f0" containerID="fbc9c66b85258c4a25c9415a67c1a7b4969a3f4e6fb178430c5d1cfa81c475d3" exitCode=1 Feb 02 13:25:11 crc kubenswrapper[4721]: I0202 13:25:11.849332 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6c76b54b86-n9kln" event={"ID":"c5f7cb67-4d7c-4bc8-bf45-c949450206f0","Type":"ContainerDied","Data":"fbc9c66b85258c4a25c9415a67c1a7b4969a3f4e6fb178430c5d1cfa81c475d3"} Feb 02 13:25:11 crc kubenswrapper[4721]: I0202 13:25:11.849372 4721 scope.go:117] "RemoveContainer" containerID="cacfe9dad3c5ac1611cb10a831ba529e6821a0590168b211ab46225955ae992f" Feb 02 13:25:11 crc kubenswrapper[4721]: I0202 13:25:11.850199 4721 scope.go:117] "RemoveContainer" containerID="fbc9c66b85258c4a25c9415a67c1a7b4969a3f4e6fb178430c5d1cfa81c475d3" Feb 02 13:25:11 crc kubenswrapper[4721]: E0202 13:25:11.850507 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6c76b54b86-n9kln_openstack(c5f7cb67-4d7c-4bc8-bf45-c949450206f0)\"" pod="openstack/heat-cfnapi-6c76b54b86-n9kln" 
podUID="c5f7cb67-4d7c-4bc8-bf45-c949450206f0" Feb 02 13:25:11 crc kubenswrapper[4721]: I0202 13:25:11.856826 4721 generic.go:334] "Generic (PLEG): container finished" podID="ee4f36c4-39c4-4cb4-b24c-676b76966752" containerID="fa800e1436f00d168fdcf5bb1372a6b44ff3f3a66add3f54482b3057f2c14bd2" exitCode=0 Feb 02 13:25:11 crc kubenswrapper[4721]: I0202 13:25:11.856914 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8588ddc4dc-rq722" event={"ID":"ee4f36c4-39c4-4cb4-b24c-676b76966752","Type":"ContainerDied","Data":"fa800e1436f00d168fdcf5bb1372a6b44ff3f3a66add3f54482b3057f2c14bd2"} Feb 02 13:25:11 crc kubenswrapper[4721]: I0202 13:25:11.857259 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8588ddc4dc-rq722" event={"ID":"ee4f36c4-39c4-4cb4-b24c-676b76966752","Type":"ContainerDied","Data":"22a62ef40c5e00afd66630e4761effc79fe1d27fa65d0590da25c3c1d43ad9bf"} Feb 02 13:25:11 crc kubenswrapper[4721]: I0202 13:25:11.856933 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8588ddc4dc-rq722" Feb 02 13:25:11 crc kubenswrapper[4721]: I0202 13:25:11.857774 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cwgk\" (UniqueName: \"kubernetes.io/projected/ee4f36c4-39c4-4cb4-b24c-676b76966752-kube-api-access-9cwgk\") pod \"ee4f36c4-39c4-4cb4-b24c-676b76966752\" (UID: \"ee4f36c4-39c4-4cb4-b24c-676b76966752\") " Feb 02 13:25:11 crc kubenswrapper[4721]: I0202 13:25:11.857914 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4f36c4-39c4-4cb4-b24c-676b76966752-config-data\") pod \"ee4f36c4-39c4-4cb4-b24c-676b76966752\" (UID: \"ee4f36c4-39c4-4cb4-b24c-676b76966752\") " Feb 02 13:25:11 crc kubenswrapper[4721]: I0202 13:25:11.858097 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee4f36c4-39c4-4cb4-b24c-676b76966752-config-data-custom\") pod \"ee4f36c4-39c4-4cb4-b24c-676b76966752\" (UID: \"ee4f36c4-39c4-4cb4-b24c-676b76966752\") " Feb 02 13:25:11 crc kubenswrapper[4721]: I0202 13:25:11.858181 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4f36c4-39c4-4cb4-b24c-676b76966752-combined-ca-bundle\") pod \"ee4f36c4-39c4-4cb4-b24c-676b76966752\" (UID: \"ee4f36c4-39c4-4cb4-b24c-676b76966752\") " Feb 02 13:25:11 crc kubenswrapper[4721]: I0202 13:25:11.870760 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0eabfa0b-0304-4eda-8f8a-dc9160569e4b","Type":"ContainerStarted","Data":"fe66034a92edf80281d3c1b9d4b4c7c688f32d48b13fd5da1326e2459fea155d"} Feb 02 13:25:11 crc kubenswrapper[4721]: I0202 13:25:11.870803 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0eabfa0b-0304-4eda-8f8a-dc9160569e4b","Type":"ContainerStarted","Data":"bba10c67506fd0369bb5fd8b56802fa27fb210d2c301daa9dcdf6f4098bdaff1"} Feb 02 13:25:11 crc kubenswrapper[4721]: I0202 13:25:11.871826 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 02 13:25:11 crc kubenswrapper[4721]: I0202 13:25:11.923056 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee4f36c4-39c4-4cb4-b24c-676b76966752-kube-api-access-9cwgk" (OuterVolumeSpecName: 
"kube-api-access-9cwgk") pod "ee4f36c4-39c4-4cb4-b24c-676b76966752" (UID: "ee4f36c4-39c4-4cb4-b24c-676b76966752"). InnerVolumeSpecName "kube-api-access-9cwgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:25:11 crc kubenswrapper[4721]: I0202 13:25:11.926363 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee4f36c4-39c4-4cb4-b24c-676b76966752-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ee4f36c4-39c4-4cb4-b24c-676b76966752" (UID: "ee4f36c4-39c4-4cb4-b24c-676b76966752"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:11 crc kubenswrapper[4721]: I0202 13:25:11.964856 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cwgk\" (UniqueName: \"kubernetes.io/projected/ee4f36c4-39c4-4cb4-b24c-676b76966752-kube-api-access-9cwgk\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:11 crc kubenswrapper[4721]: I0202 13:25:11.964901 4721 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee4f36c4-39c4-4cb4-b24c-676b76966752-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:11 crc kubenswrapper[4721]: I0202 13:25:11.983433 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee4f36c4-39c4-4cb4-b24c-676b76966752-config-data" (OuterVolumeSpecName: "config-data") pod "ee4f36c4-39c4-4cb4-b24c-676b76966752" (UID: "ee4f36c4-39c4-4cb4-b24c-676b76966752"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:12 crc kubenswrapper[4721]: I0202 13:25:12.022854 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.022837887 podStartE2EDuration="6.022837887s" podCreationTimestamp="2026-02-02 13:25:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:25:12.018307565 +0000 UTC m=+1452.320821964" watchObservedRunningTime="2026-02-02 13:25:12.022837887 +0000 UTC m=+1452.325352266" Feb 02 13:25:12 crc kubenswrapper[4721]: I0202 13:25:12.060812 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee4f36c4-39c4-4cb4-b24c-676b76966752-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee4f36c4-39c4-4cb4-b24c-676b76966752" (UID: "ee4f36c4-39c4-4cb4-b24c-676b76966752"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:12 crc kubenswrapper[4721]: I0202 13:25:12.066999 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4f36c4-39c4-4cb4-b24c-676b76966752-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:12 crc kubenswrapper[4721]: I0202 13:25:12.067040 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4f36c4-39c4-4cb4-b24c-676b76966752-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:12 crc kubenswrapper[4721]: I0202 13:25:12.252839 4721 scope.go:117] "RemoveContainer" containerID="fa800e1436f00d168fdcf5bb1372a6b44ff3f3a66add3f54482b3057f2c14bd2" Feb 02 13:25:12 crc kubenswrapper[4721]: I0202 13:25:12.256341 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-8588ddc4dc-rq722"] Feb 02 13:25:12 crc kubenswrapper[4721]: I0202 13:25:12.267307 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-8588ddc4dc-rq722"] Feb 02 13:25:12 crc kubenswrapper[4721]: I0202 13:25:12.299892 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:25:12 crc kubenswrapper[4721]: I0202 13:25:12.320673 4721 scope.go:117] "RemoveContainer" containerID="fa800e1436f00d168fdcf5bb1372a6b44ff3f3a66add3f54482b3057f2c14bd2" Feb 02 13:25:12 crc kubenswrapper[4721]: E0202 13:25:12.321227 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa800e1436f00d168fdcf5bb1372a6b44ff3f3a66add3f54482b3057f2c14bd2\": container with ID starting with fa800e1436f00d168fdcf5bb1372a6b44ff3f3a66add3f54482b3057f2c14bd2 not found: ID does not exist" containerID="fa800e1436f00d168fdcf5bb1372a6b44ff3f3a66add3f54482b3057f2c14bd2" Feb 02 13:25:12 crc kubenswrapper[4721]: I0202 13:25:12.321270 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa800e1436f00d168fdcf5bb1372a6b44ff3f3a66add3f54482b3057f2c14bd2"} err="failed to get container status \"fa800e1436f00d168fdcf5bb1372a6b44ff3f3a66add3f54482b3057f2c14bd2\": rpc error: code = NotFound desc = could not find container \"fa800e1436f00d168fdcf5bb1372a6b44ff3f3a66add3f54482b3057f2c14bd2\": container with ID starting with fa800e1436f00d168fdcf5bb1372a6b44ff3f3a66add3f54482b3057f2c14bd2 not found: ID does not exist" Feb 02 13:25:12 crc kubenswrapper[4721]: I0202 13:25:12.425267 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee4f36c4-39c4-4cb4-b24c-676b76966752" path="/var/lib/kubelet/pods/ee4f36c4-39c4-4cb4-b24c-676b76966752/volumes" Feb 02 13:25:12 crc kubenswrapper[4721]: I0202 13:25:12.893864 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"679713b8-7e9b-4ccc-87f3-85afd17dc008","Type":"ContainerStarted","Data":"3ad385a2be5227562f34db7d77851850b136530db655dfc87e87234604886068"} Feb 02 13:25:12 crc kubenswrapper[4721]: I0202 13:25:12.896663 4721 generic.go:334] "Generic (PLEG): container finished" podID="2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8" containerID="9892fb62f9251fa656b60a855b779d0ef03617e58fee62707134474322dbab8a" exitCode=1 Feb 02 13:25:12 crc kubenswrapper[4721]: I0202 13:25:12.896753 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-869db4994-hgxnh" 
event={"ID":"2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8","Type":"ContainerDied","Data":"9892fb62f9251fa656b60a855b779d0ef03617e58fee62707134474322dbab8a"} Feb 02 13:25:12 crc kubenswrapper[4721]: I0202 13:25:12.896793 4721 scope.go:117] "RemoveContainer" containerID="d6a7a5707b26d69fb1863e451668fcdd388419479b4005bb1d55054ada2fb366" Feb 02 13:25:12 crc kubenswrapper[4721]: I0202 13:25:12.897423 4721 scope.go:117] "RemoveContainer" containerID="9892fb62f9251fa656b60a855b779d0ef03617e58fee62707134474322dbab8a" Feb 02 13:25:12 crc kubenswrapper[4721]: E0202 13:25:12.897788 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-869db4994-hgxnh_openstack(2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8)\"" pod="openstack/heat-api-869db4994-hgxnh" podUID="2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8" Feb 02 13:25:12 crc kubenswrapper[4721]: I0202 13:25:12.906816 4721 scope.go:117] "RemoveContainer" containerID="fbc9c66b85258c4a25c9415a67c1a7b4969a3f4e6fb178430c5d1cfa81c475d3" Feb 02 13:25:12 crc kubenswrapper[4721]: E0202 13:25:12.907104 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6c76b54b86-n9kln_openstack(c5f7cb67-4d7c-4bc8-bf45-c949450206f0)\"" pod="openstack/heat-cfnapi-6c76b54b86-n9kln" podUID="c5f7cb67-4d7c-4bc8-bf45-c949450206f0" Feb 02 13:25:13 crc kubenswrapper[4721]: I0202 13:25:13.815111 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 13:25:13 crc kubenswrapper[4721]: I0202 13:25:13.824162 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7cdd3f19-3e66-4807-a0e8-957c713cef36" containerName="glance-httpd" containerID="cri-o://e352ddf5546d205ff44a3674807cce6288e4d9d2631c1e97a5b81f79535b48d6" gracePeriod=30 Feb 02 13:25:13 crc kubenswrapper[4721]: I0202 13:25:13.824554 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7cdd3f19-3e66-4807-a0e8-957c713cef36" containerName="glance-log" containerID="cri-o://60d5f5c74197287dc48184ed76b8f6d34b4d23a85669a6b5c075668ede91ce31" gracePeriod=30 Feb 02 13:25:13 crc kubenswrapper[4721]: I0202 13:25:13.918821 4721 scope.go:117] "RemoveContainer" containerID="9892fb62f9251fa656b60a855b779d0ef03617e58fee62707134474322dbab8a" Feb 02 13:25:13 crc kubenswrapper[4721]: E0202 13:25:13.920047 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-869db4994-hgxnh_openstack(2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8)\"" pod="openstack/heat-api-869db4994-hgxnh" podUID="2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8" Feb 02 13:25:14 crc kubenswrapper[4721]: I0202 13:25:14.137108 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-m87tn" Feb 02 13:25:14 crc kubenswrapper[4721]: I0202 13:25:14.228652 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-9czmj"] Feb 02 13:25:14 crc kubenswrapper[4721]: I0202 13:25:14.228942 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" 
podUID="f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f" containerName="dnsmasq-dns" containerID="cri-o://79b4d95f7f9db98cc9b0a4ae6c82ff501b94e112bfc5f0a0bf15dd2adbe2a2a3" gracePeriod=10 Feb 02 13:25:14 crc kubenswrapper[4721]: I0202 13:25:14.764841 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:25:14 crc kubenswrapper[4721]: I0202 13:25:14.765200 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:25:14 crc kubenswrapper[4721]: I0202 13:25:14.880371 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" Feb 02 13:25:14 crc kubenswrapper[4721]: I0202 13:25:14.939343 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"679713b8-7e9b-4ccc-87f3-85afd17dc008","Type":"ContainerStarted","Data":"ab6eff23a607854558fcb2aa7cbae46f97306ff156928f4f08c74867721f4d2c"} Feb 02 13:25:14 crc kubenswrapper[4721]: I0202 13:25:14.939558 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="679713b8-7e9b-4ccc-87f3-85afd17dc008" containerName="ceilometer-central-agent" containerID="cri-o://22439dbfbc2d5b9a7aaa38b029bb7c102abafe93d1e9397f189c54e9d9a93d8a" gracePeriod=30 Feb 02 13:25:14 crc kubenswrapper[4721]: I0202 13:25:14.939853 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 13:25:14 crc kubenswrapper[4721]: I0202 13:25:14.940287 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="679713b8-7e9b-4ccc-87f3-85afd17dc008" containerName="proxy-httpd" containerID="cri-o://ab6eff23a607854558fcb2aa7cbae46f97306ff156928f4f08c74867721f4d2c" gracePeriod=30 Feb 02 13:25:14 crc kubenswrapper[4721]: I0202 13:25:14.940354 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="679713b8-7e9b-4ccc-87f3-85afd17dc008" containerName="sg-core" containerID="cri-o://3ad385a2be5227562f34db7d77851850b136530db655dfc87e87234604886068" gracePeriod=30 Feb 02 13:25:14 crc kubenswrapper[4721]: I0202 13:25:14.940397 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="679713b8-7e9b-4ccc-87f3-85afd17dc008" containerName="ceilometer-notification-agent" containerID="cri-o://f207fef2f76230ae29ed92563bfe6192bf4e5751fb81be7d8bc1400d4d42e087" gracePeriod=30 Feb 02 13:25:14 crc kubenswrapper[4721]: I0202 13:25:14.946768 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-config\") pod \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\" (UID: \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\") " Feb 02 13:25:14 crc kubenswrapper[4721]: I0202 13:25:14.947360 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-ovsdbserver-nb\") pod \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\" (UID: \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\") " Feb 02 13:25:14 crc kubenswrapper[4721]: I0202 13:25:14.947454 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-ovsdbserver-sb\") pod \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\" (UID: \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\") " Feb 02 13:25:14 crc kubenswrapper[4721]: I0202 13:25:14.947678 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-dns-swift-storage-0\") pod \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\" (UID: \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\") " Feb 02 13:25:14 crc kubenswrapper[4721]: I0202 13:25:14.947696 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kpkt\" (UniqueName: \"kubernetes.io/projected/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-kube-api-access-5kpkt\") pod \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\" (UID: \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\") " Feb 02 13:25:14 crc kubenswrapper[4721]: I0202 13:25:14.948436 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-dns-svc\") pod \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\" (UID: \"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f\") " Feb 02 13:25:14 crc kubenswrapper[4721]: I0202 13:25:14.967664 4721 generic.go:334] "Generic (PLEG): container finished" podID="7cdd3f19-3e66-4807-a0e8-957c713cef36" containerID="60d5f5c74197287dc48184ed76b8f6d34b4d23a85669a6b5c075668ede91ce31" exitCode=143 Feb 02 13:25:14 crc kubenswrapper[4721]: I0202 13:25:14.967771 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7cdd3f19-3e66-4807-a0e8-957c713cef36","Type":"ContainerDied","Data":"60d5f5c74197287dc48184ed76b8f6d34b4d23a85669a6b5c075668ede91ce31"} Feb 02 13:25:14 crc kubenswrapper[4721]: I0202 13:25:14.976773 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.398073452 podStartE2EDuration="10.976744901s" podCreationTimestamp="2026-02-02 13:25:04 +0000 UTC" firstStartedPulling="2026-02-02 13:25:08.391728218 +0000 UTC m=+1448.694242607" lastFinishedPulling="2026-02-02 13:25:13.970399667 +0000 UTC m=+1454.272914056" observedRunningTime="2026-02-02 13:25:14.970878722 +0000 UTC m=+1455.273393121" watchObservedRunningTime="2026-02-02 13:25:14.976744901 +0000 UTC m=+1455.279259290" Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.001955 4721 generic.go:334] "Generic (PLEG): container finished" podID="f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f" containerID="79b4d95f7f9db98cc9b0a4ae6c82ff501b94e112bfc5f0a0bf15dd2adbe2a2a3" exitCode=0 Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.001986 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.002639 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" event={"ID":"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f","Type":"ContainerDied","Data":"79b4d95f7f9db98cc9b0a4ae6c82ff501b94e112bfc5f0a0bf15dd2adbe2a2a3"} Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.002771 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" event={"ID":"f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f","Type":"ContainerDied","Data":"0e5ec1ece5d46afcb7f1cd43125f924d0c21a100ba9e55bfc9c2436d4917aef7"} Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.002846 4721 scope.go:117] "RemoveContainer" containerID="79b4d95f7f9db98cc9b0a4ae6c82ff501b94e112bfc5f0a0bf15dd2adbe2a2a3" Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.003316 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-kube-api-access-5kpkt" (OuterVolumeSpecName: "kube-api-access-5kpkt") pod "f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f" (UID: "f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f"). InnerVolumeSpecName "kube-api-access-5kpkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.046618 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f" (UID: "f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.064824 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kpkt\" (UniqueName: \"kubernetes.io/projected/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-kube-api-access-5kpkt\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.064863 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.079502 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f" (UID: "f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.086200 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mlbxn" podUID="37372b76-ef54-4a44-9b56-dea754373219" containerName="registry-server" probeResult="failure" output=< Feb 02 13:25:15 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Feb 02 13:25:15 crc kubenswrapper[4721]: > Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.115684 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f" (UID: "f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.121595 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f" (UID: "f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.138714 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-config" (OuterVolumeSpecName: "config") pod "f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f" (UID: "f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.166540 4721 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.166578 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.166587 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.166595 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.279776 4721 scope.go:117] "RemoveContainer" containerID="4cdfecffc79f466040cbdb103a7f0aa0881b7f82c12764f848d66d7d7434f505" Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.304309 4721 scope.go:117] "RemoveContainer" containerID="79b4d95f7f9db98cc9b0a4ae6c82ff501b94e112bfc5f0a0bf15dd2adbe2a2a3" Feb 02 13:25:15 crc kubenswrapper[4721]: E0202 13:25:15.304780 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79b4d95f7f9db98cc9b0a4ae6c82ff501b94e112bfc5f0a0bf15dd2adbe2a2a3\": container with ID starting with 79b4d95f7f9db98cc9b0a4ae6c82ff501b94e112bfc5f0a0bf15dd2adbe2a2a3 not found: ID does not exist" containerID="79b4d95f7f9db98cc9b0a4ae6c82ff501b94e112bfc5f0a0bf15dd2adbe2a2a3" Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.304827 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79b4d95f7f9db98cc9b0a4ae6c82ff501b94e112bfc5f0a0bf15dd2adbe2a2a3"} err="failed to get container status \"79b4d95f7f9db98cc9b0a4ae6c82ff501b94e112bfc5f0a0bf15dd2adbe2a2a3\": rpc error: code = NotFound desc = could not find container \"79b4d95f7f9db98cc9b0a4ae6c82ff501b94e112bfc5f0a0bf15dd2adbe2a2a3\": container with ID starting with 79b4d95f7f9db98cc9b0a4ae6c82ff501b94e112bfc5f0a0bf15dd2adbe2a2a3 not found: ID does not exist" Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.304847 4721 scope.go:117] "RemoveContainer" 
containerID="4cdfecffc79f466040cbdb103a7f0aa0881b7f82c12764f848d66d7d7434f505" Feb 02 13:25:15 crc kubenswrapper[4721]: E0202 13:25:15.305241 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cdfecffc79f466040cbdb103a7f0aa0881b7f82c12764f848d66d7d7434f505\": container with ID starting with 4cdfecffc79f466040cbdb103a7f0aa0881b7f82c12764f848d66d7d7434f505 not found: ID does not exist" containerID="4cdfecffc79f466040cbdb103a7f0aa0881b7f82c12764f848d66d7d7434f505" Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.305291 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cdfecffc79f466040cbdb103a7f0aa0881b7f82c12764f848d66d7d7434f505"} err="failed to get container status \"4cdfecffc79f466040cbdb103a7f0aa0881b7f82c12764f848d66d7d7434f505\": rpc error: code = NotFound desc = could not find container \"4cdfecffc79f466040cbdb103a7f0aa0881b7f82c12764f848d66d7d7434f505\": container with ID starting with 4cdfecffc79f466040cbdb103a7f0aa0881b7f82c12764f848d66d7d7434f505 not found: ID does not exist" Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.359295 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-9czmj"] Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.379430 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-9czmj"] Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.417026 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.417273 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8075cf6d-3ae0-468e-98cb-5f341d78b8ac" containerName="glance-log" containerID="cri-o://6b2f4ace0eb75cc9899716f3900092d7c461a3838ae11f1c1f66ca5572586d0f" gracePeriod=30 Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.417372 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8075cf6d-3ae0-468e-98cb-5f341d78b8ac" containerName="glance-httpd" containerID="cri-o://8d37382e602df793413e2180a90609111293c970770dc8adb79b0513cd945df9" gracePeriod=30 Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.927460 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-869db4994-hgxnh" Feb 02 13:25:15 crc kubenswrapper[4721]: I0202 13:25:15.928882 4721 scope.go:117] "RemoveContainer" containerID="9892fb62f9251fa656b60a855b779d0ef03617e58fee62707134474322dbab8a" Feb 02 13:25:15 crc kubenswrapper[4721]: E0202 13:25:15.929224 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-869db4994-hgxnh_openstack(2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8)\"" pod="openstack/heat-api-869db4994-hgxnh" podUID="2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8" Feb 02 13:25:16 crc kubenswrapper[4721]: I0202 13:25:16.005377 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6c76b54b86-n9kln" Feb 02 13:25:16 crc kubenswrapper[4721]: I0202 13:25:16.006651 4721 scope.go:117] "RemoveContainer" containerID="fbc9c66b85258c4a25c9415a67c1a7b4969a3f4e6fb178430c5d1cfa81c475d3" Feb 02 13:25:16 crc kubenswrapper[4721]: E0202 13:25:16.006948 4721 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6c76b54b86-n9kln_openstack(c5f7cb67-4d7c-4bc8-bf45-c949450206f0)\"" pod="openstack/heat-cfnapi-6c76b54b86-n9kln" podUID="c5f7cb67-4d7c-4bc8-bf45-c949450206f0" Feb 02 13:25:16 crc kubenswrapper[4721]: I0202 13:25:16.010298 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-6c76b54b86-n9kln" Feb 02 13:25:16 crc kubenswrapper[4721]: I0202 13:25:16.043359 4721 generic.go:334] "Generic (PLEG): container finished" podID="679713b8-7e9b-4ccc-87f3-85afd17dc008" containerID="ab6eff23a607854558fcb2aa7cbae46f97306ff156928f4f08c74867721f4d2c" exitCode=0 Feb 02 13:25:16 crc kubenswrapper[4721]: I0202 13:25:16.043393 4721 generic.go:334] "Generic (PLEG): container finished" podID="679713b8-7e9b-4ccc-87f3-85afd17dc008" containerID="3ad385a2be5227562f34db7d77851850b136530db655dfc87e87234604886068" exitCode=2 Feb 02 13:25:16 crc kubenswrapper[4721]: I0202 13:25:16.043404 4721 generic.go:334] "Generic (PLEG): container finished" podID="679713b8-7e9b-4ccc-87f3-85afd17dc008" containerID="f207fef2f76230ae29ed92563bfe6192bf4e5751fb81be7d8bc1400d4d42e087" exitCode=0 Feb 02 13:25:16 crc kubenswrapper[4721]: I0202 13:25:16.043469 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"679713b8-7e9b-4ccc-87f3-85afd17dc008","Type":"ContainerDied","Data":"ab6eff23a607854558fcb2aa7cbae46f97306ff156928f4f08c74867721f4d2c"} Feb 02 13:25:16 crc kubenswrapper[4721]: I0202 13:25:16.043504 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"679713b8-7e9b-4ccc-87f3-85afd17dc008","Type":"ContainerDied","Data":"3ad385a2be5227562f34db7d77851850b136530db655dfc87e87234604886068"} Feb 02 13:25:16 crc kubenswrapper[4721]: I0202 13:25:16.043513 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"679713b8-7e9b-4ccc-87f3-85afd17dc008","Type":"ContainerDied","Data":"f207fef2f76230ae29ed92563bfe6192bf4e5751fb81be7d8bc1400d4d42e087"} Feb 02 13:25:16 crc kubenswrapper[4721]: I0202 13:25:16.050446 4721 generic.go:334] "Generic (PLEG): container finished" podID="8075cf6d-3ae0-468e-98cb-5f341d78b8ac" containerID="6b2f4ace0eb75cc9899716f3900092d7c461a3838ae11f1c1f66ca5572586d0f" exitCode=143 Feb 02 13:25:16 crc kubenswrapper[4721]: I0202 13:25:16.050515 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8075cf6d-3ae0-468e-98cb-5f341d78b8ac","Type":"ContainerDied","Data":"6b2f4ace0eb75cc9899716f3900092d7c461a3838ae11f1c1f66ca5572586d0f"} Feb 02 13:25:16 crc kubenswrapper[4721]: I0202 13:25:16.054383 4721 scope.go:117] "RemoveContainer" containerID="fbc9c66b85258c4a25c9415a67c1a7b4969a3f4e6fb178430c5d1cfa81c475d3" Feb 02 13:25:16 crc kubenswrapper[4721]: E0202 13:25:16.054701 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6c76b54b86-n9kln_openstack(c5f7cb67-4d7c-4bc8-bf45-c949450206f0)\"" pod="openstack/heat-cfnapi-6c76b54b86-n9kln" podUID="c5f7cb67-4d7c-4bc8-bf45-c949450206f0" Feb 02 13:25:16 crc kubenswrapper[4721]: I0202 13:25:16.425805 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f" 
path="/var/lib/kubelet/pods/f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f/volumes" Feb 02 13:25:17 crc kubenswrapper[4721]: I0202 13:25:17.004637 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-59bff7fb85-wq6q5" Feb 02 13:25:17 crc kubenswrapper[4721]: I0202 13:25:17.072125 4721 generic.go:334] "Generic (PLEG): container finished" podID="7cdd3f19-3e66-4807-a0e8-957c713cef36" containerID="e352ddf5546d205ff44a3674807cce6288e4d9d2631c1e97a5b81f79535b48d6" exitCode=0 Feb 02 13:25:17 crc kubenswrapper[4721]: I0202 13:25:17.072333 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7cdd3f19-3e66-4807-a0e8-957c713cef36","Type":"ContainerDied","Data":"e352ddf5546d205ff44a3674807cce6288e4d9d2631c1e97a5b81f79535b48d6"} Feb 02 13:25:17 crc kubenswrapper[4721]: I0202 13:25:17.698367 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 13:25:17 crc kubenswrapper[4721]: I0202 13:25:17.837864 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7cdd3f19-3e66-4807-a0e8-957c713cef36-httpd-run\") pod \"7cdd3f19-3e66-4807-a0e8-957c713cef36\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " Feb 02 13:25:17 crc kubenswrapper[4721]: I0202 13:25:17.837937 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cdd3f19-3e66-4807-a0e8-957c713cef36-logs\") pod \"7cdd3f19-3e66-4807-a0e8-957c713cef36\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " Feb 02 13:25:17 crc kubenswrapper[4721]: I0202 13:25:17.838000 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cdd3f19-3e66-4807-a0e8-957c713cef36-config-data\") pod \"7cdd3f19-3e66-4807-a0e8-957c713cef36\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " Feb 02 13:25:17 crc kubenswrapper[4721]: I0202 13:25:17.843210 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\") pod \"7cdd3f19-3e66-4807-a0e8-957c713cef36\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " Feb 02 13:25:17 crc kubenswrapper[4721]: I0202 13:25:17.843294 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cdd3f19-3e66-4807-a0e8-957c713cef36-public-tls-certs\") pod \"7cdd3f19-3e66-4807-a0e8-957c713cef36\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " Feb 02 13:25:17 crc kubenswrapper[4721]: I0202 13:25:17.843527 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cdd3f19-3e66-4807-a0e8-957c713cef36-scripts\") pod \"7cdd3f19-3e66-4807-a0e8-957c713cef36\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " Feb 02 13:25:17 crc kubenswrapper[4721]: I0202 13:25:17.843552 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpqt2\" (UniqueName: \"kubernetes.io/projected/7cdd3f19-3e66-4807-a0e8-957c713cef36-kube-api-access-vpqt2\") pod \"7cdd3f19-3e66-4807-a0e8-957c713cef36\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " Feb 02 13:25:17 crc kubenswrapper[4721]: I0202 13:25:17.843699 4721 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cdd3f19-3e66-4807-a0e8-957c713cef36-combined-ca-bundle\") pod \"7cdd3f19-3e66-4807-a0e8-957c713cef36\" (UID: \"7cdd3f19-3e66-4807-a0e8-957c713cef36\") " Feb 02 13:25:17 crc kubenswrapper[4721]: I0202 13:25:17.843738 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cdd3f19-3e66-4807-a0e8-957c713cef36-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7cdd3f19-3e66-4807-a0e8-957c713cef36" (UID: "7cdd3f19-3e66-4807-a0e8-957c713cef36"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:25:17 crc kubenswrapper[4721]: I0202 13:25:17.845482 4721 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7cdd3f19-3e66-4807-a0e8-957c713cef36-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:17 crc kubenswrapper[4721]: I0202 13:25:17.846002 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cdd3f19-3e66-4807-a0e8-957c713cef36-logs" (OuterVolumeSpecName: "logs") pod "7cdd3f19-3e66-4807-a0e8-957c713cef36" (UID: "7cdd3f19-3e66-4807-a0e8-957c713cef36"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:25:17 crc kubenswrapper[4721]: I0202 13:25:17.884514 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cdd3f19-3e66-4807-a0e8-957c713cef36-kube-api-access-vpqt2" (OuterVolumeSpecName: "kube-api-access-vpqt2") pod "7cdd3f19-3e66-4807-a0e8-957c713cef36" (UID: "7cdd3f19-3e66-4807-a0e8-957c713cef36"). InnerVolumeSpecName "kube-api-access-vpqt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:25:17 crc kubenswrapper[4721]: I0202 13:25:17.894330 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cdd3f19-3e66-4807-a0e8-957c713cef36-scripts" (OuterVolumeSpecName: "scripts") pod "7cdd3f19-3e66-4807-a0e8-957c713cef36" (UID: "7cdd3f19-3e66-4807-a0e8-957c713cef36"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:17 crc kubenswrapper[4721]: I0202 13:25:17.948164 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cdd3f19-3e66-4807-a0e8-957c713cef36-logs\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:17 crc kubenswrapper[4721]: I0202 13:25:17.948595 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cdd3f19-3e66-4807-a0e8-957c713cef36-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:17 crc kubenswrapper[4721]: I0202 13:25:17.948682 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpqt2\" (UniqueName: \"kubernetes.io/projected/7cdd3f19-3e66-4807-a0e8-957c713cef36-kube-api-access-vpqt2\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:17 crc kubenswrapper[4721]: I0202 13:25:17.975308 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cdd3f19-3e66-4807-a0e8-957c713cef36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cdd3f19-3e66-4807-a0e8-957c713cef36" (UID: "7cdd3f19-3e66-4807-a0e8-957c713cef36"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.058146 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cdd3f19-3e66-4807-a0e8-957c713cef36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.165426 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7cdd3f19-3e66-4807-a0e8-957c713cef36","Type":"ContainerDied","Data":"57b27fa9d3d382916d0fc10e2c5127febea2102b5b13a8f86a12bdcd66a461e9"} Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.165488 4721 scope.go:117] "RemoveContainer" containerID="e352ddf5546d205ff44a3674807cce6288e4d9d2631c1e97a5b81f79535b48d6" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.165674 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.188981 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cdd3f19-3e66-4807-a0e8-957c713cef36-config-data" (OuterVolumeSpecName: "config-data") pod "7cdd3f19-3e66-4807-a0e8-957c713cef36" (UID: "7cdd3f19-3e66-4807-a0e8-957c713cef36"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.209616 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663" (OuterVolumeSpecName: "glance") pod "7cdd3f19-3e66-4807-a0e8-957c713cef36" (UID: "7cdd3f19-3e66-4807-a0e8-957c713cef36"). InnerVolumeSpecName "pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.231252 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cdd3f19-3e66-4807-a0e8-957c713cef36-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7cdd3f19-3e66-4807-a0e8-957c713cef36" (UID: "7cdd3f19-3e66-4807-a0e8-957c713cef36"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.272577 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cdd3f19-3e66-4807-a0e8-957c713cef36-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.272657 4721 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\") on node \"crc\" " Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.272672 4721 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cdd3f19-3e66-4807-a0e8-957c713cef36-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.333569 4721 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.333751 4721 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663") on node "crc" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.374471 4721 reconciler_common.go:293] "Volume detached for volume \"pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.399126 4721 scope.go:117] "RemoveContainer" containerID="60d5f5c74197287dc48184ed76b8f6d34b4d23a85669a6b5c075668ede91ce31" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.510437 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.528480 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.542274 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 13:25:18 crc kubenswrapper[4721]: E0202 13:25:18.543103 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f" containerName="init" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.543119 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f" containerName="init" Feb 02 13:25:18 crc kubenswrapper[4721]: E0202 13:25:18.543149 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee4f36c4-39c4-4cb4-b24c-676b76966752" containerName="heat-api" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.543157 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee4f36c4-39c4-4cb4-b24c-676b76966752" containerName="heat-api" Feb 02 13:25:18 crc kubenswrapper[4721]: E0202 13:25:18.543167 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cdd3f19-3e66-4807-a0e8-957c713cef36" containerName="glance-httpd" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.543173 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cdd3f19-3e66-4807-a0e8-957c713cef36" containerName="glance-httpd" Feb 02 13:25:18 crc kubenswrapper[4721]: E0202 13:25:18.543192 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f" containerName="dnsmasq-dns" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.543197 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f" containerName="dnsmasq-dns" Feb 02 13:25:18 crc kubenswrapper[4721]: E0202 13:25:18.543213 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cdd3f19-3e66-4807-a0e8-957c713cef36" containerName="glance-log" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.543219 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cdd3f19-3e66-4807-a0e8-957c713cef36" containerName="glance-log" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.543466 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cdd3f19-3e66-4807-a0e8-957c713cef36" containerName="glance-httpd" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.543492 4721 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7cdd3f19-3e66-4807-a0e8-957c713cef36" containerName="glance-log" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.543515 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f" containerName="dnsmasq-dns" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.543539 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee4f36c4-39c4-4cb4-b24c-676b76966752" containerName="heat-api" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.545034 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.547873 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.548041 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.567308 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.687148 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e9328e-fd9a-4a87-946b-2c46e25bea51-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"23e9328e-fd9a-4a87-946b-2c46e25bea51\") " pod="openstack/glance-default-external-api-0" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.687244 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23e9328e-fd9a-4a87-946b-2c46e25bea51-config-data\") pod \"glance-default-external-api-0\" (UID: \"23e9328e-fd9a-4a87-946b-2c46e25bea51\") " pod="openstack/glance-default-external-api-0" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.687293 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/23e9328e-fd9a-4a87-946b-2c46e25bea51-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"23e9328e-fd9a-4a87-946b-2c46e25bea51\") " pod="openstack/glance-default-external-api-0" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.687336 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89lj5\" (UniqueName: \"kubernetes.io/projected/23e9328e-fd9a-4a87-946b-2c46e25bea51-kube-api-access-89lj5\") pod \"glance-default-external-api-0\" (UID: \"23e9328e-fd9a-4a87-946b-2c46e25bea51\") " pod="openstack/glance-default-external-api-0" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.687368 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23e9328e-fd9a-4a87-946b-2c46e25bea51-scripts\") pod \"glance-default-external-api-0\" (UID: \"23e9328e-fd9a-4a87-946b-2c46e25bea51\") " pod="openstack/glance-default-external-api-0" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.687492 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\") pod 
\"glance-default-external-api-0\" (UID: \"23e9328e-fd9a-4a87-946b-2c46e25bea51\") " pod="openstack/glance-default-external-api-0" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.687612 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/23e9328e-fd9a-4a87-946b-2c46e25bea51-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"23e9328e-fd9a-4a87-946b-2c46e25bea51\") " pod="openstack/glance-default-external-api-0" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.687669 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23e9328e-fd9a-4a87-946b-2c46e25bea51-logs\") pod \"glance-default-external-api-0\" (UID: \"23e9328e-fd9a-4a87-946b-2c46e25bea51\") " pod="openstack/glance-default-external-api-0" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.790541 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23e9328e-fd9a-4a87-946b-2c46e25bea51-logs\") pod \"glance-default-external-api-0\" (UID: \"23e9328e-fd9a-4a87-946b-2c46e25bea51\") " pod="openstack/glance-default-external-api-0" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.790638 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e9328e-fd9a-4a87-946b-2c46e25bea51-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"23e9328e-fd9a-4a87-946b-2c46e25bea51\") " pod="openstack/glance-default-external-api-0" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.790719 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23e9328e-fd9a-4a87-946b-2c46e25bea51-config-data\") pod \"glance-default-external-api-0\" (UID: \"23e9328e-fd9a-4a87-946b-2c46e25bea51\") " pod="openstack/glance-default-external-api-0" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.790763 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/23e9328e-fd9a-4a87-946b-2c46e25bea51-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"23e9328e-fd9a-4a87-946b-2c46e25bea51\") " pod="openstack/glance-default-external-api-0" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.790801 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89lj5\" (UniqueName: \"kubernetes.io/projected/23e9328e-fd9a-4a87-946b-2c46e25bea51-kube-api-access-89lj5\") pod \"glance-default-external-api-0\" (UID: \"23e9328e-fd9a-4a87-946b-2c46e25bea51\") " pod="openstack/glance-default-external-api-0" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.790841 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23e9328e-fd9a-4a87-946b-2c46e25bea51-scripts\") pod \"glance-default-external-api-0\" (UID: \"23e9328e-fd9a-4a87-946b-2c46e25bea51\") " pod="openstack/glance-default-external-api-0" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.790984 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\") pod 
\"glance-default-external-api-0\" (UID: \"23e9328e-fd9a-4a87-946b-2c46e25bea51\") " pod="openstack/glance-default-external-api-0" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.791166 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/23e9328e-fd9a-4a87-946b-2c46e25bea51-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"23e9328e-fd9a-4a87-946b-2c46e25bea51\") " pod="openstack/glance-default-external-api-0" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.794162 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23e9328e-fd9a-4a87-946b-2c46e25bea51-logs\") pod \"glance-default-external-api-0\" (UID: \"23e9328e-fd9a-4a87-946b-2c46e25bea51\") " pod="openstack/glance-default-external-api-0" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.796029 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/23e9328e-fd9a-4a87-946b-2c46e25bea51-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"23e9328e-fd9a-4a87-946b-2c46e25bea51\") " pod="openstack/glance-default-external-api-0" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.800338 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e9328e-fd9a-4a87-946b-2c46e25bea51-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"23e9328e-fd9a-4a87-946b-2c46e25bea51\") " pod="openstack/glance-default-external-api-0" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.800359 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/23e9328e-fd9a-4a87-946b-2c46e25bea51-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"23e9328e-fd9a-4a87-946b-2c46e25bea51\") " pod="openstack/glance-default-external-api-0" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.801025 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23e9328e-fd9a-4a87-946b-2c46e25bea51-config-data\") pod \"glance-default-external-api-0\" (UID: \"23e9328e-fd9a-4a87-946b-2c46e25bea51\") " pod="openstack/glance-default-external-api-0" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.801028 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23e9328e-fd9a-4a87-946b-2c46e25bea51-scripts\") pod \"glance-default-external-api-0\" (UID: \"23e9328e-fd9a-4a87-946b-2c46e25bea51\") " pod="openstack/glance-default-external-api-0" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.804178 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.804227 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\") pod \"glance-default-external-api-0\" (UID: \"23e9328e-fd9a-4a87-946b-2c46e25bea51\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c7812605d9919b226d4340fce797cd8fb18c9c948d1e68864aa7eb7aeecf4816/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.818858 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89lj5\" (UniqueName: \"kubernetes.io/projected/23e9328e-fd9a-4a87-946b-2c46e25bea51-kube-api-access-89lj5\") pod \"glance-default-external-api-0\" (UID: \"23e9328e-fd9a-4a87-946b-2c46e25bea51\") " pod="openstack/glance-default-external-api-0" Feb 02 13:25:18 crc kubenswrapper[4721]: I0202 13:25:18.963678 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41e23a11-a3eb-45e5-a6c7-a5d94d094663\") pod \"glance-default-external-api-0\" (UID: \"23e9328e-fd9a-4a87-946b-2c46e25bea51\") " pod="openstack/glance-default-external-api-0" Feb 02 13:25:19 crc kubenswrapper[4721]: I0202 13:25:19.170410 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 13:25:19 crc kubenswrapper[4721]: I0202 13:25:19.193326 4721 generic.go:334] "Generic (PLEG): container finished" podID="8075cf6d-3ae0-468e-98cb-5f341d78b8ac" containerID="8d37382e602df793413e2180a90609111293c970770dc8adb79b0513cd945df9" exitCode=0 Feb 02 13:25:19 crc kubenswrapper[4721]: I0202 13:25:19.193397 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8075cf6d-3ae0-468e-98cb-5f341d78b8ac","Type":"ContainerDied","Data":"8d37382e602df793413e2180a90609111293c970770dc8adb79b0513cd945df9"} Feb 02 13:25:19 crc kubenswrapper[4721]: I0202 13:25:19.581223 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9776ccc5-9czmj" podUID="f1c8acb8-ffe3-467a-a19b-9ccdc0fca18f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.212:5353: i/o timeout" Feb 02 13:25:19 crc kubenswrapper[4721]: I0202 13:25:19.863167 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 13:25:19 crc kubenswrapper[4721]: I0202 13:25:19.921098 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-combined-ca-bundle\") pod \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " Feb 02 13:25:19 crc kubenswrapper[4721]: I0202 13:25:19.921203 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-httpd-run\") pod \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " Feb 02 13:25:19 crc kubenswrapper[4721]: I0202 13:25:19.921262 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6x5k\" (UniqueName: \"kubernetes.io/projected/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-kube-api-access-r6x5k\") pod \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " Feb 02 13:25:19 crc kubenswrapper[4721]: I0202 13:25:19.930166 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8075cf6d-3ae0-468e-98cb-5f341d78b8ac" (UID: "8075cf6d-3ae0-468e-98cb-5f341d78b8ac"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:25:19 crc kubenswrapper[4721]: I0202 13:25:19.934299 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\") pod \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " Feb 02 13:25:19 crc kubenswrapper[4721]: I0202 13:25:19.941160 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-logs\") pod \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " Feb 02 13:25:19 crc kubenswrapper[4721]: I0202 13:25:19.949421 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-logs" (OuterVolumeSpecName: "logs") pod "8075cf6d-3ae0-468e-98cb-5f341d78b8ac" (UID: "8075cf6d-3ae0-468e-98cb-5f341d78b8ac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:25:19 crc kubenswrapper[4721]: I0202 13:25:19.952530 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-kube-api-access-r6x5k" (OuterVolumeSpecName: "kube-api-access-r6x5k") pod "8075cf6d-3ae0-468e-98cb-5f341d78b8ac" (UID: "8075cf6d-3ae0-468e-98cb-5f341d78b8ac"). InnerVolumeSpecName "kube-api-access-r6x5k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:25:19 crc kubenswrapper[4721]: I0202 13:25:19.953085 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-internal-tls-certs\") pod \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " Feb 02 13:25:19 crc kubenswrapper[4721]: I0202 13:25:19.953245 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-scripts\") pod \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " Feb 02 13:25:19 crc kubenswrapper[4721]: I0202 13:25:19.953292 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-config-data\") pod \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\" (UID: \"8075cf6d-3ae0-468e-98cb-5f341d78b8ac\") " Feb 02 13:25:19 crc kubenswrapper[4721]: I0202 13:25:19.954502 4721 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:19 crc kubenswrapper[4721]: I0202 13:25:19.954517 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6x5k\" (UniqueName: \"kubernetes.io/projected/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-kube-api-access-r6x5k\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:19 crc kubenswrapper[4721]: I0202 13:25:19.954527 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-logs\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:19 crc kubenswrapper[4721]: I0202 13:25:19.966342 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-scripts" (OuterVolumeSpecName: "scripts") pod "8075cf6d-3ae0-468e-98cb-5f341d78b8ac" (UID: "8075cf6d-3ae0-468e-98cb-5f341d78b8ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.045599 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8075cf6d-3ae0-468e-98cb-5f341d78b8ac" (UID: "8075cf6d-3ae0-468e-98cb-5f341d78b8ac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.087432 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.087471 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.132600 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8075cf6d-3ae0-468e-98cb-5f341d78b8ac" (UID: "8075cf6d-3ae0-468e-98cb-5f341d78b8ac"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.148316 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56daf3c3-162c-4970-aab6-c4cecea22e79" (OuterVolumeSpecName: "glance") pod "8075cf6d-3ae0-468e-98cb-5f341d78b8ac" (UID: "8075cf6d-3ae0-468e-98cb-5f341d78b8ac"). InnerVolumeSpecName "pvc-56daf3c3-162c-4970-aab6-c4cecea22e79". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.214401 4721 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\") on node \"crc\" " Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.214446 4721 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.225455 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.243854 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-config-data" (OuterVolumeSpecName: "config-data") pod "8075cf6d-3ae0-468e-98cb-5f341d78b8ac" (UID: "8075cf6d-3ae0-468e-98cb-5f341d78b8ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.269365 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8075cf6d-3ae0-468e-98cb-5f341d78b8ac","Type":"ContainerDied","Data":"901860dc4f2cb59ed85a54f8bc10b9859a36c07381edb1151f125b84138e4df8"} Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.269443 4721 scope.go:117] "RemoveContainer" containerID="8d37382e602df793413e2180a90609111293c970770dc8adb79b0513cd945df9" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.269544 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.299734 4721 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping UnmountDevice... Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.299934 4721 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-56daf3c3-162c-4970-aab6-c4cecea22e79" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56daf3c3-162c-4970-aab6-c4cecea22e79") on node "crc" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.316927 4721 reconciler_common.go:293] "Volume detached for volume \"pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.316985 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8075cf6d-3ae0-468e-98cb-5f341d78b8ac-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.398401 4721 scope.go:117] "RemoveContainer" containerID="6b2f4ace0eb75cc9899716f3900092d7c461a3838ae11f1c1f66ca5572586d0f" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.462362 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cdd3f19-3e66-4807-a0e8-957c713cef36" path="/var/lib/kubelet/pods/7cdd3f19-3e66-4807-a0e8-957c713cef36/volumes" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.602059 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.618395 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.637088 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 13:25:20 crc kubenswrapper[4721]: E0202 13:25:20.637632 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8075cf6d-3ae0-468e-98cb-5f341d78b8ac" containerName="glance-httpd" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.637645 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="8075cf6d-3ae0-468e-98cb-5f341d78b8ac" containerName="glance-httpd" Feb 02 13:25:20 crc kubenswrapper[4721]: E0202 13:25:20.637662 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8075cf6d-3ae0-468e-98cb-5f341d78b8ac" containerName="glance-log" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.637668 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="8075cf6d-3ae0-468e-98cb-5f341d78b8ac" containerName="glance-log" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.637863 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="8075cf6d-3ae0-468e-98cb-5f341d78b8ac" containerName="glance-log" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.637884 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="8075cf6d-3ae0-468e-98cb-5f341d78b8ac" containerName="glance-httpd" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.641892 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.646188 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.646511 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.693595 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.831384 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-22bg7"] Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.833350 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-22bg7" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.843188 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f5129b1-fc26-40ba-9cf7-0f86e93507cd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5f5129b1-fc26-40ba-9cf7-0f86e93507cd\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.843471 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\") pod \"glance-default-internal-api-0\" (UID: \"5f5129b1-fc26-40ba-9cf7-0f86e93507cd\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.843641 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f5129b1-fc26-40ba-9cf7-0f86e93507cd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5f5129b1-fc26-40ba-9cf7-0f86e93507cd\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.843725 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f5129b1-fc26-40ba-9cf7-0f86e93507cd-logs\") pod \"glance-default-internal-api-0\" (UID: \"5f5129b1-fc26-40ba-9cf7-0f86e93507cd\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.843842 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f5129b1-fc26-40ba-9cf7-0f86e93507cd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5f5129b1-fc26-40ba-9cf7-0f86e93507cd\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.843996 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v68kr\" (UniqueName: \"kubernetes.io/projected/5f5129b1-fc26-40ba-9cf7-0f86e93507cd-kube-api-access-v68kr\") pod \"glance-default-internal-api-0\" (UID: \"5f5129b1-fc26-40ba-9cf7-0f86e93507cd\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.844270 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f5129b1-fc26-40ba-9cf7-0f86e93507cd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5f5129b1-fc26-40ba-9cf7-0f86e93507cd\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.844341 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f5129b1-fc26-40ba-9cf7-0f86e93507cd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5f5129b1-fc26-40ba-9cf7-0f86e93507cd\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.849641 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-22bg7"] Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.948235 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v68kr\" (UniqueName: \"kubernetes.io/projected/5f5129b1-fc26-40ba-9cf7-0f86e93507cd-kube-api-access-v68kr\") pod \"glance-default-internal-api-0\" (UID: \"5f5129b1-fc26-40ba-9cf7-0f86e93507cd\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.948295 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f5129b1-fc26-40ba-9cf7-0f86e93507cd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5f5129b1-fc26-40ba-9cf7-0f86e93507cd\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.948338 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f5129b1-fc26-40ba-9cf7-0f86e93507cd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5f5129b1-fc26-40ba-9cf7-0f86e93507cd\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.948396 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f5129b1-fc26-40ba-9cf7-0f86e93507cd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5f5129b1-fc26-40ba-9cf7-0f86e93507cd\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.948443 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz92z\" (UniqueName: \"kubernetes.io/projected/31d33cb7-8d98-44cc-97ef-229d34805e46-kube-api-access-cz92z\") pod \"nova-api-db-create-22bg7\" (UID: \"31d33cb7-8d98-44cc-97ef-229d34805e46\") " pod="openstack/nova-api-db-create-22bg7" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.948506 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\") pod \"glance-default-internal-api-0\" (UID: \"5f5129b1-fc26-40ba-9cf7-0f86e93507cd\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.948537 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31d33cb7-8d98-44cc-97ef-229d34805e46-operator-scripts\") pod 
\"nova-api-db-create-22bg7\" (UID: \"31d33cb7-8d98-44cc-97ef-229d34805e46\") " pod="openstack/nova-api-db-create-22bg7" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.948661 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f5129b1-fc26-40ba-9cf7-0f86e93507cd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5f5129b1-fc26-40ba-9cf7-0f86e93507cd\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.948693 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f5129b1-fc26-40ba-9cf7-0f86e93507cd-logs\") pod \"glance-default-internal-api-0\" (UID: \"5f5129b1-fc26-40ba-9cf7-0f86e93507cd\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.948770 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f5129b1-fc26-40ba-9cf7-0f86e93507cd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5f5129b1-fc26-40ba-9cf7-0f86e93507cd\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.951510 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f5129b1-fc26-40ba-9cf7-0f86e93507cd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5f5129b1-fc26-40ba-9cf7-0f86e93507cd\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.952053 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f5129b1-fc26-40ba-9cf7-0f86e93507cd-logs\") pod \"glance-default-internal-api-0\" (UID: \"5f5129b1-fc26-40ba-9cf7-0f86e93507cd\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.955161 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.955203 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\") pod \"glance-default-internal-api-0\" (UID: \"5f5129b1-fc26-40ba-9cf7-0f86e93507cd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9f813ebfbde533117d7c5539927c335132efd30cff3e1cb355d78cb9d4c1a927/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.963273 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f5129b1-fc26-40ba-9cf7-0f86e93507cd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5f5129b1-fc26-40ba-9cf7-0f86e93507cd\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.963479 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f5129b1-fc26-40ba-9cf7-0f86e93507cd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5f5129b1-fc26-40ba-9cf7-0f86e93507cd\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.963680 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f5129b1-fc26-40ba-9cf7-0f86e93507cd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5f5129b1-fc26-40ba-9cf7-0f86e93507cd\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.968257 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f5129b1-fc26-40ba-9cf7-0f86e93507cd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5f5129b1-fc26-40ba-9cf7-0f86e93507cd\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:25:20 crc kubenswrapper[4721]: I0202 13:25:20.971160 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v68kr\" (UniqueName: \"kubernetes.io/projected/5f5129b1-fc26-40ba-9cf7-0f86e93507cd-kube-api-access-v68kr\") pod \"glance-default-internal-api-0\" (UID: \"5f5129b1-fc26-40ba-9cf7-0f86e93507cd\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.045232 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-zbntf"] Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.048020 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-zbntf" Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.051119 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz92z\" (UniqueName: \"kubernetes.io/projected/31d33cb7-8d98-44cc-97ef-229d34805e46-kube-api-access-cz92z\") pod \"nova-api-db-create-22bg7\" (UID: \"31d33cb7-8d98-44cc-97ef-229d34805e46\") " pod="openstack/nova-api-db-create-22bg7" Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.051202 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31d33cb7-8d98-44cc-97ef-229d34805e46-operator-scripts\") pod \"nova-api-db-create-22bg7\" (UID: \"31d33cb7-8d98-44cc-97ef-229d34805e46\") " pod="openstack/nova-api-db-create-22bg7" Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.051970 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31d33cb7-8d98-44cc-97ef-229d34805e46-operator-scripts\") pod \"nova-api-db-create-22bg7\" (UID: \"31d33cb7-8d98-44cc-97ef-229d34805e46\") " pod="openstack/nova-api-db-create-22bg7" Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.086826 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz92z\" (UniqueName: \"kubernetes.io/projected/31d33cb7-8d98-44cc-97ef-229d34805e46-kube-api-access-cz92z\") pod \"nova-api-db-create-22bg7\" (UID: \"31d33cb7-8d98-44cc-97ef-229d34805e46\") " pod="openstack/nova-api-db-create-22bg7" Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.109199 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-zbntf"] Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.135002 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56daf3c3-162c-4970-aab6-c4cecea22e79\") pod \"glance-default-internal-api-0\" (UID: \"5f5129b1-fc26-40ba-9cf7-0f86e93507cd\") " pod="openstack/glance-default-internal-api-0" Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.146390 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="0eabfa0b-0304-4eda-8f8a-dc9160569e4b" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.228:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.157103 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmnpw\" (UniqueName: \"kubernetes.io/projected/84e297f9-7808-4195-86b2-2c17f4638bf2-kube-api-access-bmnpw\") pod \"nova-cell0-db-create-zbntf\" (UID: \"84e297f9-7808-4195-86b2-2c17f4638bf2\") " pod="openstack/nova-cell0-db-create-zbntf" Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.157312 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84e297f9-7808-4195-86b2-2c17f4638bf2-operator-scripts\") pod \"nova-cell0-db-create-zbntf\" (UID: \"84e297f9-7808-4195-86b2-2c17f4638bf2\") " pod="openstack/nova-cell0-db-create-zbntf" Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.195527 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-22bg7" Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.241814 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-7hfxs"] Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.244918 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7hfxs" Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.261097 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmnpw\" (UniqueName: \"kubernetes.io/projected/84e297f9-7808-4195-86b2-2c17f4638bf2-kube-api-access-bmnpw\") pod \"nova-cell0-db-create-zbntf\" (UID: \"84e297f9-7808-4195-86b2-2c17f4638bf2\") " pod="openstack/nova-cell0-db-create-zbntf" Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.261266 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84e297f9-7808-4195-86b2-2c17f4638bf2-operator-scripts\") pod \"nova-cell0-db-create-zbntf\" (UID: \"84e297f9-7808-4195-86b2-2c17f4638bf2\") " pod="openstack/nova-cell0-db-create-zbntf" Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.262285 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84e297f9-7808-4195-86b2-2c17f4638bf2-operator-scripts\") pod \"nova-cell0-db-create-zbntf\" (UID: \"84e297f9-7808-4195-86b2-2c17f4638bf2\") " pod="openstack/nova-cell0-db-create-zbntf" Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.264319 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7hfxs"] Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.303688 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.308542 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmnpw\" (UniqueName: \"kubernetes.io/projected/84e297f9-7808-4195-86b2-2c17f4638bf2-kube-api-access-bmnpw\") pod \"nova-cell0-db-create-zbntf\" (UID: \"84e297f9-7808-4195-86b2-2c17f4638bf2\") " pod="openstack/nova-cell0-db-create-zbntf" Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.315678 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-596786fd64-rpzql" Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.317235 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-ecf5-account-create-update-k6kdv"] Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.322259 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-zbntf" Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.334462 4721 util.go:30] "No sandbox for pod can be found. 
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.347923 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.360556 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ecf5-account-create-update-k6kdv"]
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.363863 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flbcl\" (UniqueName: \"kubernetes.io/projected/147719a3-96ca-4551-a395-648dd45b4ce6-kube-api-access-flbcl\") pod \"nova-cell1-db-create-7hfxs\" (UID: \"147719a3-96ca-4551-a395-648dd45b4ce6\") " pod="openstack/nova-cell1-db-create-7hfxs"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.363927 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147719a3-96ca-4551-a395-648dd45b4ce6-operator-scripts\") pod \"nova-cell1-db-create-7hfxs\" (UID: \"147719a3-96ca-4551-a395-648dd45b4ce6\") " pod="openstack/nova-cell1-db-create-7hfxs"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.441045 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"23e9328e-fd9a-4a87-946b-2c46e25bea51","Type":"ContainerStarted","Data":"03d1e984fbc3dacab36da010e889edf1bc89ded6833cebae3aae0dcfb162406b"}
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.489029 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147719a3-96ca-4551-a395-648dd45b4ce6-operator-scripts\") pod \"nova-cell1-db-create-7hfxs\" (UID: \"147719a3-96ca-4551-a395-648dd45b4ce6\") " pod="openstack/nova-cell1-db-create-7hfxs"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.489187 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k25zs\" (UniqueName: \"kubernetes.io/projected/a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1-kube-api-access-k25zs\") pod \"nova-api-ecf5-account-create-update-k6kdv\" (UID: \"a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1\") " pod="openstack/nova-api-ecf5-account-create-update-k6kdv"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.489542 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1-operator-scripts\") pod \"nova-api-ecf5-account-create-update-k6kdv\" (UID: \"a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1\") " pod="openstack/nova-api-ecf5-account-create-update-k6kdv"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.490119 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flbcl\" (UniqueName: \"kubernetes.io/projected/147719a3-96ca-4551-a395-648dd45b4ce6-kube-api-access-flbcl\") pod \"nova-cell1-db-create-7hfxs\" (UID: \"147719a3-96ca-4551-a395-648dd45b4ce6\") " pod="openstack/nova-cell1-db-create-7hfxs"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.512806 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147719a3-96ca-4551-a395-648dd45b4ce6-operator-scripts\") pod \"nova-cell1-db-create-7hfxs\" (UID: \"147719a3-96ca-4551-a395-648dd45b4ce6\") " pod="openstack/nova-cell1-db-create-7hfxs"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.572911 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flbcl\" (UniqueName: \"kubernetes.io/projected/147719a3-96ca-4551-a395-648dd45b4ce6-kube-api-access-flbcl\") pod \"nova-cell1-db-create-7hfxs\" (UID: \"147719a3-96ca-4551-a395-648dd45b4ce6\") " pod="openstack/nova-cell1-db-create-7hfxs"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.608979 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-5c37-account-create-update-h9w2m"]
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.615114 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5c37-account-create-update-h9w2m"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.618136 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k25zs\" (UniqueName: \"kubernetes.io/projected/a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1-kube-api-access-k25zs\") pod \"nova-api-ecf5-account-create-update-k6kdv\" (UID: \"a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1\") " pod="openstack/nova-api-ecf5-account-create-update-k6kdv"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.618248 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1-operator-scripts\") pod \"nova-api-ecf5-account-create-update-k6kdv\" (UID: \"a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1\") " pod="openstack/nova-api-ecf5-account-create-update-k6kdv"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.619240 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1-operator-scripts\") pod \"nova-api-ecf5-account-create-update-k6kdv\" (UID: \"a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1\") " pod="openstack/nova-api-ecf5-account-create-update-k6kdv"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.619586 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.632950 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5c37-account-create-update-h9w2m"]
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.662796 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k25zs\" (UniqueName: \"kubernetes.io/projected/a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1-kube-api-access-k25zs\") pod \"nova-api-ecf5-account-create-update-k6kdv\" (UID: \"a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1\") " pod="openstack/nova-api-ecf5-account-create-update-k6kdv"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.721575 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-69599c8b5f-rjs76"]
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.722211 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-69599c8b5f-rjs76" podUID="ce072a84-75da-4060-9c4a-d029b3a14947" containerName="heat-engine" containerID="cri-o://824ac82da4e9fbe898c73f52200c686a08effd8f70bdcb2e1cbad03807a59062" gracePeriod=60
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.728683 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dd874ae-fdb8-4f98-ae51-dac54a44e001-operator-scripts\") pod \"nova-cell0-5c37-account-create-update-h9w2m\" (UID: \"0dd874ae-fdb8-4f98-ae51-dac54a44e001\") " pod="openstack/nova-cell0-5c37-account-create-update-h9w2m"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.728777 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djb56\" (UniqueName: \"kubernetes.io/projected/0dd874ae-fdb8-4f98-ae51-dac54a44e001-kube-api-access-djb56\") pod \"nova-cell0-5c37-account-create-update-h9w2m\" (UID: \"0dd874ae-fdb8-4f98-ae51-dac54a44e001\") " pod="openstack/nova-cell0-5c37-account-create-update-h9w2m"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.733999 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bjlf4"]
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.739731 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bjlf4"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.754158 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjlf4"]
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.755351 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7hfxs"
Feb 02 13:25:21 crc kubenswrapper[4721]: E0202 13:25:21.756478 4721 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="824ac82da4e9fbe898c73f52200c686a08effd8f70bdcb2e1cbad03807a59062" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Feb 02 13:25:21 crc kubenswrapper[4721]: E0202 13:25:21.787500 4721 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="824ac82da4e9fbe898c73f52200c686a08effd8f70bdcb2e1cbad03807a59062" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.805937 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ecf5-account-create-update-k6kdv"
Feb 02 13:25:21 crc kubenswrapper[4721]: E0202 13:25:21.821671 4721 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="824ac82da4e9fbe898c73f52200c686a08effd8f70bdcb2e1cbad03807a59062" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Feb 02 13:25:21 crc kubenswrapper[4721]: E0202 13:25:21.822597 4721 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-69599c8b5f-rjs76" podUID="ce072a84-75da-4060-9c4a-d029b3a14947" containerName="heat-engine"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.831637 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1487506-5263-4ffe-b3e0-1a7a507590f9-catalog-content\") pod \"redhat-marketplace-bjlf4\" (UID: \"f1487506-5263-4ffe-b3e0-1a7a507590f9\") " pod="openshift-marketplace/redhat-marketplace-bjlf4"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.831688 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvkl7\" (UniqueName: \"kubernetes.io/projected/f1487506-5263-4ffe-b3e0-1a7a507590f9-kube-api-access-rvkl7\") pod \"redhat-marketplace-bjlf4\" (UID: \"f1487506-5263-4ffe-b3e0-1a7a507590f9\") " pod="openshift-marketplace/redhat-marketplace-bjlf4"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.831722 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1487506-5263-4ffe-b3e0-1a7a507590f9-utilities\") pod \"redhat-marketplace-bjlf4\" (UID: \"f1487506-5263-4ffe-b3e0-1a7a507590f9\") " pod="openshift-marketplace/redhat-marketplace-bjlf4"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.831832 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dd874ae-fdb8-4f98-ae51-dac54a44e001-operator-scripts\") pod \"nova-cell0-5c37-account-create-update-h9w2m\" (UID: \"0dd874ae-fdb8-4f98-ae51-dac54a44e001\") " pod="openstack/nova-cell0-5c37-account-create-update-h9w2m"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.832485 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djb56\" (UniqueName: \"kubernetes.io/projected/0dd874ae-fdb8-4f98-ae51-dac54a44e001-kube-api-access-djb56\") pod \"nova-cell0-5c37-account-create-update-h9w2m\" (UID: \"0dd874ae-fdb8-4f98-ae51-dac54a44e001\") " pod="openstack/nova-cell0-5c37-account-create-update-h9w2m"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.833239 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dd874ae-fdb8-4f98-ae51-dac54a44e001-operator-scripts\") pod \"nova-cell0-5c37-account-create-update-h9w2m\" (UID: \"0dd874ae-fdb8-4f98-ae51-dac54a44e001\") " pod="openstack/nova-cell0-5c37-account-create-update-h9w2m"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.839261 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-6b23-account-create-update-5q82h"]
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.841373 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6b23-account-create-update-5q82h"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.849251 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.869723 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djb56\" (UniqueName: \"kubernetes.io/projected/0dd874ae-fdb8-4f98-ae51-dac54a44e001-kube-api-access-djb56\") pod \"nova-cell0-5c37-account-create-update-h9w2m\" (UID: \"0dd874ae-fdb8-4f98-ae51-dac54a44e001\") " pod="openstack/nova-cell0-5c37-account-create-update-h9w2m"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.905633 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6b23-account-create-update-5q82h"]
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.949433 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/011e7b6f-64eb-48b5-be89-8304581d4c5f-operator-scripts\") pod \"nova-cell1-6b23-account-create-update-5q82h\" (UID: \"011e7b6f-64eb-48b5-be89-8304581d4c5f\") " pod="openstack/nova-cell1-6b23-account-create-update-5q82h"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.950057 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1487506-5263-4ffe-b3e0-1a7a507590f9-catalog-content\") pod \"redhat-marketplace-bjlf4\" (UID: \"f1487506-5263-4ffe-b3e0-1a7a507590f9\") " pod="openshift-marketplace/redhat-marketplace-bjlf4"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.950162 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvkl7\" (UniqueName: \"kubernetes.io/projected/f1487506-5263-4ffe-b3e0-1a7a507590f9-kube-api-access-rvkl7\") pod \"redhat-marketplace-bjlf4\" (UID: \"f1487506-5263-4ffe-b3e0-1a7a507590f9\") " pod="openshift-marketplace/redhat-marketplace-bjlf4"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.966536 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1487506-5263-4ffe-b3e0-1a7a507590f9-utilities\") pod \"redhat-marketplace-bjlf4\" (UID: \"f1487506-5263-4ffe-b3e0-1a7a507590f9\") " pod="openshift-marketplace/redhat-marketplace-bjlf4"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.967142 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5c37-account-create-update-h9w2m"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.967745 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdp4w\" (UniqueName: \"kubernetes.io/projected/011e7b6f-64eb-48b5-be89-8304581d4c5f-kube-api-access-kdp4w\") pod \"nova-cell1-6b23-account-create-update-5q82h\" (UID: \"011e7b6f-64eb-48b5-be89-8304581d4c5f\") " pod="openstack/nova-cell1-6b23-account-create-update-5q82h"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.968248 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1487506-5263-4ffe-b3e0-1a7a507590f9-utilities\") pod \"redhat-marketplace-bjlf4\" (UID: \"f1487506-5263-4ffe-b3e0-1a7a507590f9\") " pod="openshift-marketplace/redhat-marketplace-bjlf4"
Feb 02 13:25:21 crc kubenswrapper[4721]: I0202 13:25:21.968507 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1487506-5263-4ffe-b3e0-1a7a507590f9-catalog-content\") pod \"redhat-marketplace-bjlf4\" (UID: \"f1487506-5263-4ffe-b3e0-1a7a507590f9\") " pod="openshift-marketplace/redhat-marketplace-bjlf4"
Feb 02 13:25:22 crc kubenswrapper[4721]: I0202 13:25:22.024874 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvkl7\" (UniqueName: \"kubernetes.io/projected/f1487506-5263-4ffe-b3e0-1a7a507590f9-kube-api-access-rvkl7\") pod \"redhat-marketplace-bjlf4\" (UID: \"f1487506-5263-4ffe-b3e0-1a7a507590f9\") " pod="openshift-marketplace/redhat-marketplace-bjlf4"
Feb 02 13:25:22 crc kubenswrapper[4721]: I0202 13:25:22.070600 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/011e7b6f-64eb-48b5-be89-8304581d4c5f-operator-scripts\") pod \"nova-cell1-6b23-account-create-update-5q82h\" (UID: \"011e7b6f-64eb-48b5-be89-8304581d4c5f\") " pod="openstack/nova-cell1-6b23-account-create-update-5q82h"
Feb 02 13:25:22 crc kubenswrapper[4721]: I0202 13:25:22.070841 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdp4w\" (UniqueName: \"kubernetes.io/projected/011e7b6f-64eb-48b5-be89-8304581d4c5f-kube-api-access-kdp4w\") pod \"nova-cell1-6b23-account-create-update-5q82h\" (UID: \"011e7b6f-64eb-48b5-be89-8304581d4c5f\") " pod="openstack/nova-cell1-6b23-account-create-update-5q82h"
Feb 02 13:25:22 crc kubenswrapper[4721]: I0202 13:25:22.073733 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/011e7b6f-64eb-48b5-be89-8304581d4c5f-operator-scripts\") pod \"nova-cell1-6b23-account-create-update-5q82h\" (UID: \"011e7b6f-64eb-48b5-be89-8304581d4c5f\") " pod="openstack/nova-cell1-6b23-account-create-update-5q82h"
Feb 02 13:25:22 crc kubenswrapper[4721]: I0202 13:25:22.115705 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdp4w\" (UniqueName: \"kubernetes.io/projected/011e7b6f-64eb-48b5-be89-8304581d4c5f-kube-api-access-kdp4w\") pod \"nova-cell1-6b23-account-create-update-5q82h\" (UID: \"011e7b6f-64eb-48b5-be89-8304581d4c5f\") " pod="openstack/nova-cell1-6b23-account-create-update-5q82h"
Feb 02 13:25:22 crc kubenswrapper[4721]: I0202 13:25:22.124164 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bjlf4"
Feb 02 13:25:22 crc kubenswrapper[4721]: I0202 13:25:22.147499 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="0eabfa0b-0304-4eda-8f8a-dc9160569e4b" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.228:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 02 13:25:22 crc kubenswrapper[4721]: I0202 13:25:22.181674 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6b23-account-create-update-5q82h"
Feb 02 13:25:22 crc kubenswrapper[4721]: I0202 13:25:22.353466 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-86974d69bd-t6gcz"
Feb 02 13:25:22 crc kubenswrapper[4721]: I0202 13:25:22.459151 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8075cf6d-3ae0-468e-98cb-5f341d78b8ac" path="/var/lib/kubelet/pods/8075cf6d-3ae0-468e-98cb-5f341d78b8ac/volumes"
Feb 02 13:25:22 crc kubenswrapper[4721]: I0202 13:25:22.503595 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-869db4994-hgxnh"]
Feb 02 13:25:22 crc kubenswrapper[4721]: I0202 13:25:22.592142 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-22bg7"]
Feb 02 13:25:22 crc kubenswrapper[4721]: I0202 13:25:22.656367 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"23e9328e-fd9a-4a87-946b-2c46e25bea51","Type":"ContainerStarted","Data":"b6effbfcc05b55e6fc18e1c48e951cdd4e7c47b90122d732e13d24419723120f"}
Feb 02 13:25:22 crc kubenswrapper[4721]: I0202 13:25:22.685959 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-64c55c4cc7-4htzp"
Feb 02 13:25:22 crc kubenswrapper[4721]: I0202 13:25:22.843833 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6c76b54b86-n9kln"]
Feb 02 13:25:23 crc kubenswrapper[4721]: I0202 13:25:23.699814 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-869db4994-hgxnh"
Feb 02 13:25:23 crc kubenswrapper[4721]: I0202 13:25:23.774281 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-869db4994-hgxnh" event={"ID":"2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8","Type":"ContainerDied","Data":"b49739d0b82310eb7a88b3289808bff2fac1bf21a0b025f6531793460d8748fd"}
Feb 02 13:25:23 crc kubenswrapper[4721]: I0202 13:25:23.774624 4721 scope.go:117] "RemoveContainer" containerID="9892fb62f9251fa656b60a855b779d0ef03617e58fee62707134474322dbab8a"
Feb 02 13:25:23 crc kubenswrapper[4721]: I0202 13:25:23.777012 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 13:25:23 crc kubenswrapper[4721]: I0202 13:25:23.818419 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-22bg7" event={"ID":"31d33cb7-8d98-44cc-97ef-229d34805e46","Type":"ContainerStarted","Data":"4fc7b5047fba1d65d14acd76d647401f0b37510534baf6ea3bf2254dfd744004"}
Feb 02 13:25:23 crc kubenswrapper[4721]: I0202 13:25:23.818511 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-22bg7" event={"ID":"31d33cb7-8d98-44cc-97ef-229d34805e46","Type":"ContainerStarted","Data":"17351fe868bc448175365bc355dc57fe9dc3ca03a3c9d151703111ff468d604c"}
Feb 02 13:25:23 crc kubenswrapper[4721]: I0202 13:25:23.847641 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-zbntf"]
Feb 02 13:25:23 crc kubenswrapper[4721]: I0202 13:25:23.885917 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8-config-data-custom\") pod \"2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8\" (UID: \"2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8\") "
Feb 02 13:25:23 crc kubenswrapper[4721]: I0202 13:25:23.886026 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95q9j\" (UniqueName: \"kubernetes.io/projected/2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8-kube-api-access-95q9j\") pod \"2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8\" (UID: \"2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8\") "
Feb 02 13:25:23 crc kubenswrapper[4721]: I0202 13:25:23.886142 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8-combined-ca-bundle\") pod \"2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8\" (UID: \"2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8\") "
Feb 02 13:25:23 crc kubenswrapper[4721]: I0202 13:25:23.886273 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8-config-data\") pod \"2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8\" (UID: \"2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8\") "
Feb 02 13:25:23 crc kubenswrapper[4721]: I0202 13:25:23.925093 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8-kube-api-access-95q9j" (OuterVolumeSpecName: "kube-api-access-95q9j") pod "2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8" (UID: "2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8"). InnerVolumeSpecName "kube-api-access-95q9j". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:25:23 crc kubenswrapper[4721]: I0202 13:25:23.936152 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-22bg7" podStartSLOduration=3.936101807 podStartE2EDuration="3.936101807s" podCreationTimestamp="2026-02-02 13:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:25:23.841437399 +0000 UTC m=+1464.143951788" watchObservedRunningTime="2026-02-02 13:25:23.936101807 +0000 UTC m=+1464.238616196" Feb 02 13:25:23 crc kubenswrapper[4721]: I0202 13:25:23.938377 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8" (UID: "2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:23 crc kubenswrapper[4721]: I0202 13:25:23.959195 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8" (UID: "2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:24 crc kubenswrapper[4721]: I0202 13:25:24.040453 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8-config-data" (OuterVolumeSpecName: "config-data") pod "2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8" (UID: "2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:24 crc kubenswrapper[4721]: I0202 13:25:24.057287 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8-config-data\") pod \"2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8\" (UID: \"2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8\") " Feb 02 13:25:24 crc kubenswrapper[4721]: I0202 13:25:24.058431 4721 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:24 crc kubenswrapper[4721]: I0202 13:25:24.058460 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95q9j\" (UniqueName: \"kubernetes.io/projected/2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8-kube-api-access-95q9j\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:24 crc kubenswrapper[4721]: I0202 13:25:24.058475 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:24 crc kubenswrapper[4721]: W0202 13:25:24.058803 4721 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8/volumes/kubernetes.io~secret/config-data Feb 02 13:25:24 crc kubenswrapper[4721]: I0202 13:25:24.058822 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8-config-data" (OuterVolumeSpecName: "config-data") pod "2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8" (UID: "2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:24 crc kubenswrapper[4721]: E0202 13:25:24.122319 4721 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="824ac82da4e9fbe898c73f52200c686a08effd8f70bdcb2e1cbad03807a59062" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 02 13:25:24 crc kubenswrapper[4721]: E0202 13:25:24.162910 4721 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="824ac82da4e9fbe898c73f52200c686a08effd8f70bdcb2e1cbad03807a59062" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 02 13:25:24 crc kubenswrapper[4721]: E0202 13:25:24.185525 4721 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="824ac82da4e9fbe898c73f52200c686a08effd8f70bdcb2e1cbad03807a59062" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 02 13:25:24 crc kubenswrapper[4721]: E0202 13:25:24.185606 4721 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-69599c8b5f-rjs76" podUID="ce072a84-75da-4060-9c4a-d029b3a14947" containerName="heat-engine" Feb 02 13:25:24 crc kubenswrapper[4721]: I0202 13:25:24.205679 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:24 crc kubenswrapper[4721]: I0202 13:25:24.637531 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7hfxs"] Feb 02 13:25:24 crc kubenswrapper[4721]: I0202 13:25:24.671153 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6c76b54b86-n9kln" Feb 02 13:25:24 crc kubenswrapper[4721]: I0202 13:25:24.675817 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ecf5-account-create-update-k6kdv"] Feb 02 13:25:24 crc kubenswrapper[4721]: I0202 13:25:24.725969 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5f7cb67-4d7c-4bc8-bf45-c949450206f0-config-data-custom\") pod \"c5f7cb67-4d7c-4bc8-bf45-c949450206f0\" (UID: \"c5f7cb67-4d7c-4bc8-bf45-c949450206f0\") " Feb 02 13:25:24 crc kubenswrapper[4721]: I0202 13:25:24.726147 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vs22\" (UniqueName: \"kubernetes.io/projected/c5f7cb67-4d7c-4bc8-bf45-c949450206f0-kube-api-access-7vs22\") pod \"c5f7cb67-4d7c-4bc8-bf45-c949450206f0\" (UID: \"c5f7cb67-4d7c-4bc8-bf45-c949450206f0\") " Feb 02 13:25:24 crc kubenswrapper[4721]: I0202 13:25:24.726190 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5f7cb67-4d7c-4bc8-bf45-c949450206f0-combined-ca-bundle\") pod \"c5f7cb67-4d7c-4bc8-bf45-c949450206f0\" (UID: \"c5f7cb67-4d7c-4bc8-bf45-c949450206f0\") " Feb 02 13:25:24 crc kubenswrapper[4721]: I0202 13:25:24.726315 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5f7cb67-4d7c-4bc8-bf45-c949450206f0-config-data\") pod \"c5f7cb67-4d7c-4bc8-bf45-c949450206f0\" (UID: \"c5f7cb67-4d7c-4bc8-bf45-c949450206f0\") " Feb 02 13:25:24 crc kubenswrapper[4721]: I0202 13:25:24.738894 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f7cb67-4d7c-4bc8-bf45-c949450206f0-kube-api-access-7vs22" (OuterVolumeSpecName: "kube-api-access-7vs22") pod "c5f7cb67-4d7c-4bc8-bf45-c949450206f0" (UID: "c5f7cb67-4d7c-4bc8-bf45-c949450206f0"). InnerVolumeSpecName "kube-api-access-7vs22". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:25:24 crc kubenswrapper[4721]: I0202 13:25:24.756606 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f7cb67-4d7c-4bc8-bf45-c949450206f0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c5f7cb67-4d7c-4bc8-bf45-c949450206f0" (UID: "c5f7cb67-4d7c-4bc8-bf45-c949450206f0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:24 crc kubenswrapper[4721]: I0202 13:25:24.796154 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f7cb67-4d7c-4bc8-bf45-c949450206f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5f7cb67-4d7c-4bc8-bf45-c949450206f0" (UID: "c5f7cb67-4d7c-4bc8-bf45-c949450206f0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:24 crc kubenswrapper[4721]: I0202 13:25:24.846687 4721 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5f7cb67-4d7c-4bc8-bf45-c949450206f0-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:24 crc kubenswrapper[4721]: I0202 13:25:24.846734 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vs22\" (UniqueName: \"kubernetes.io/projected/c5f7cb67-4d7c-4bc8-bf45-c949450206f0-kube-api-access-7vs22\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:24 crc kubenswrapper[4721]: I0202 13:25:24.846747 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5f7cb67-4d7c-4bc8-bf45-c949450206f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:24 crc kubenswrapper[4721]: I0202 13:25:24.854627 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f7cb67-4d7c-4bc8-bf45-c949450206f0-config-data" (OuterVolumeSpecName: "config-data") pod "c5f7cb67-4d7c-4bc8-bf45-c949450206f0" (UID: "c5f7cb67-4d7c-4bc8-bf45-c949450206f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:24 crc kubenswrapper[4721]: I0202 13:25:24.893781 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6c76b54b86-n9kln" event={"ID":"c5f7cb67-4d7c-4bc8-bf45-c949450206f0","Type":"ContainerDied","Data":"622fe9b6176f587f10b3c6f779b6fa54763b7ccedb8408baf73c74843450fabf"} Feb 02 13:25:24 crc kubenswrapper[4721]: I0202 13:25:24.893842 4721 scope.go:117] "RemoveContainer" containerID="fbc9c66b85258c4a25c9415a67c1a7b4969a3f4e6fb178430c5d1cfa81c475d3" Feb 02 13:25:24 crc kubenswrapper[4721]: I0202 13:25:24.893947 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6c76b54b86-n9kln" Feb 02 13:25:24 crc kubenswrapper[4721]: I0202 13:25:24.950891 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5f7cb67-4d7c-4bc8-bf45-c949450206f0-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:24 crc kubenswrapper[4721]: I0202 13:25:24.954985 4721 generic.go:334] "Generic (PLEG): container finished" podID="31d33cb7-8d98-44cc-97ef-229d34805e46" containerID="4fc7b5047fba1d65d14acd76d647401f0b37510534baf6ea3bf2254dfd744004" exitCode=0 Feb 02 13:25:24 crc kubenswrapper[4721]: I0202 13:25:24.955057 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-22bg7" event={"ID":"31d33cb7-8d98-44cc-97ef-229d34805e46","Type":"ContainerDied","Data":"4fc7b5047fba1d65d14acd76d647401f0b37510534baf6ea3bf2254dfd744004"} Feb 02 13:25:24 crc kubenswrapper[4721]: I0202 13:25:24.973495 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5c37-account-create-update-h9w2m"] Feb 02 13:25:25 crc kubenswrapper[4721]: W0202 13:25:24.999796 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0dd874ae_fdb8_4f98_ae51_dac54a44e001.slice/crio-878a87f71ad3d4e4d73726cacc66c5efd64f6c529a185c729f0711dd8a268409 WatchSource:0}: Error finding container 878a87f71ad3d4e4d73726cacc66c5efd64f6c529a185c729f0711dd8a268409: Status 404 returned error can't find the container with id 878a87f71ad3d4e4d73726cacc66c5efd64f6c529a185c729f0711dd8a268409 Feb 02 13:25:25 crc kubenswrapper[4721]: I0202 13:25:24.999994 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7hfxs" event={"ID":"147719a3-96ca-4551-a395-648dd45b4ce6","Type":"ContainerStarted","Data":"a55ae8457b5a788504e1e84ee76eca10d4fc6aebec31718324d6620e0d97f659"} Feb 02 13:25:25 crc kubenswrapper[4721]: I0202 13:25:25.003349 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ecf5-account-create-update-k6kdv" event={"ID":"a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1","Type":"ContainerStarted","Data":"e8901988f690d98fcea72012066d25ce8684039b6b674156f34336f208a3df47"} Feb 02 13:25:25 crc kubenswrapper[4721]: W0202 13:25:25.008607 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod011e7b6f_64eb_48b5_be89_8304581d4c5f.slice/crio-5d76c238acf537be4b1d7f54733a1d3d5ad8fa6ffa042633ba6b38b8df9885e8 WatchSource:0}: Error finding container 5d76c238acf537be4b1d7f54733a1d3d5ad8fa6ffa042633ba6b38b8df9885e8: Status 404 returned error can't find the container with id 5d76c238acf537be4b1d7f54733a1d3d5ad8fa6ffa042633ba6b38b8df9885e8 Feb 02 13:25:25 crc kubenswrapper[4721]: W0202 13:25:25.014683 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1487506_5263_4ffe_b3e0_1a7a507590f9.slice/crio-baa4f2b8c7f38c357e7fd8029e914fb27779853104aeda245f29bdec34cc4815 WatchSource:0}: Error finding container baa4f2b8c7f38c357e7fd8029e914fb27779853104aeda245f29bdec34cc4815: Status 404 returned error can't find the container with id baa4f2b8c7f38c357e7fd8029e914fb27779853104aeda245f29bdec34cc4815 Feb 02 13:25:25 crc kubenswrapper[4721]: I0202 13:25:25.023547 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"23e9328e-fd9a-4a87-946b-2c46e25bea51","Type":"ContainerStarted","Data":"8bd63c89c14f1ccb69ddfb8d130bbe5d4974317b78ad4239a84a5e73a223cecb"} Feb 02 13:25:25 crc kubenswrapper[4721]: I0202 13:25:25.030157 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjlf4"] Feb 02 13:25:25 crc kubenswrapper[4721]: I0202 13:25:25.041097 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6c76b54b86-n9kln"] Feb 02 13:25:25 crc kubenswrapper[4721]: I0202 13:25:25.062597 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6c76b54b86-n9kln"] Feb 02 13:25:25 crc kubenswrapper[4721]: I0202 13:25:25.072085 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-869db4994-hgxnh" Feb 02 13:25:25 crc kubenswrapper[4721]: I0202 13:25:25.076663 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5f5129b1-fc26-40ba-9cf7-0f86e93507cd","Type":"ContainerStarted","Data":"9c7d6cab875b3efe0c4b2c75baf8fffd41267d61cd17b3464709f39d27ada580"} Feb 02 13:25:25 crc kubenswrapper[4721]: I0202 13:25:25.077710 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zbntf" event={"ID":"84e297f9-7808-4195-86b2-2c17f4638bf2","Type":"ContainerStarted","Data":"9f52f888eba9daf5e6e283524ff7481c4a05eeb6d1ae52e82e3fc8b08b7473c5"} Feb 02 13:25:25 crc kubenswrapper[4721]: I0202 13:25:25.077730 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zbntf" event={"ID":"84e297f9-7808-4195-86b2-2c17f4638bf2","Type":"ContainerStarted","Data":"5f9fe02523a0af94db9eb026cf39983eed77325df1c327f0bf401d20a6a4ed62"} Feb 02 13:25:25 crc kubenswrapper[4721]: I0202 13:25:25.083394 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6b23-account-create-update-5q82h"] Feb 02 13:25:25 crc kubenswrapper[4721]: I0202 13:25:25.123378 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.123354387 podStartE2EDuration="7.123354387s" podCreationTimestamp="2026-02-02 13:25:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:25:25.050652535 +0000 UTC m=+1465.353166924" watchObservedRunningTime="2026-02-02 13:25:25.123354387 +0000 UTC m=+1465.425868786" Feb 02 13:25:25 crc kubenswrapper[4721]: I0202 13:25:25.147266 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mlbxn" podUID="37372b76-ef54-4a44-9b56-dea754373219" containerName="registry-server" probeResult="failure" output=< Feb 02 13:25:25 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Feb 02 13:25:25 crc kubenswrapper[4721]: > Feb 02 13:25:25 crc kubenswrapper[4721]: I0202 13:25:25.195496 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-zbntf" podStartSLOduration=5.195473822 podStartE2EDuration="5.195473822s" podCreationTimestamp="2026-02-02 13:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:25:25.114644191 +0000 UTC m=+1465.417158580" watchObservedRunningTime="2026-02-02 13:25:25.195473822 +0000 UTC m=+1465.497988211" Feb 02 13:25:25 crc kubenswrapper[4721]: 
I0202 13:25:25.245155 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-869db4994-hgxnh"] Feb 02 13:25:25 crc kubenswrapper[4721]: I0202 13:25:25.250252 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-869db4994-hgxnh"] Feb 02 13:25:26 crc kubenswrapper[4721]: I0202 13:25:26.161435 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ecf5-account-create-update-k6kdv" event={"ID":"a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1","Type":"ContainerStarted","Data":"d2ceca0ae565540870f02401237596a428eda4dcf72c0158ff8c7116acbfc486"} Feb 02 13:25:26 crc kubenswrapper[4721]: I0202 13:25:26.166100 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="0eabfa0b-0304-4eda-8f8a-dc9160569e4b" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.228:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 13:25:26 crc kubenswrapper[4721]: I0202 13:25:26.191378 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5c37-account-create-update-h9w2m" event={"ID":"0dd874ae-fdb8-4f98-ae51-dac54a44e001","Type":"ContainerStarted","Data":"878a87f71ad3d4e4d73726cacc66c5efd64f6c529a185c729f0711dd8a268409"} Feb 02 13:25:26 crc kubenswrapper[4721]: I0202 13:25:26.220362 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjlf4" event={"ID":"f1487506-5263-4ffe-b3e0-1a7a507590f9","Type":"ContainerStarted","Data":"4e9ec40449c9a13570d177a0fa260a694310bc1d1b70021291444805d29dc83d"} Feb 02 13:25:26 crc kubenswrapper[4721]: I0202 13:25:26.220415 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjlf4" event={"ID":"f1487506-5263-4ffe-b3e0-1a7a507590f9","Type":"ContainerStarted","Data":"baa4f2b8c7f38c357e7fd8029e914fb27779853104aeda245f29bdec34cc4815"} Feb 02 13:25:26 crc kubenswrapper[4721]: I0202 13:25:26.247772 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-ecf5-account-create-update-k6kdv" podStartSLOduration=5.247746731 podStartE2EDuration="5.247746731s" podCreationTimestamp="2026-02-02 13:25:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:25:26.201653301 +0000 UTC m=+1466.504167700" watchObservedRunningTime="2026-02-02 13:25:26.247746731 +0000 UTC m=+1466.550261140" Feb 02 13:25:26 crc kubenswrapper[4721]: I0202 13:25:26.248316 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6b23-account-create-update-5q82h" event={"ID":"011e7b6f-64eb-48b5-be89-8304581d4c5f","Type":"ContainerStarted","Data":"a4df6290a6ff822c9798aad4bb78bddad86f6ee3871a5520d115e6e491f3950e"} Feb 02 13:25:26 crc kubenswrapper[4721]: I0202 13:25:26.248374 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6b23-account-create-update-5q82h" event={"ID":"011e7b6f-64eb-48b5-be89-8304581d4c5f","Type":"ContainerStarted","Data":"5d76c238acf537be4b1d7f54733a1d3d5ad8fa6ffa042633ba6b38b8df9885e8"} Feb 02 13:25:26 crc kubenswrapper[4721]: I0202 13:25:26.285299 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5f5129b1-fc26-40ba-9cf7-0f86e93507cd","Type":"ContainerStarted","Data":"1e83aec53278f669956b6c12748294af646707461d1e4d3bcbf8fa106771c9c4"} Feb 02 13:25:26 crc 
Feb 02 13:25:26 crc kubenswrapper[4721]: I0202 13:25:26.310407 4721 generic.go:334] "Generic (PLEG): container finished" podID="84e297f9-7808-4195-86b2-2c17f4638bf2" containerID="9f52f888eba9daf5e6e283524ff7481c4a05eeb6d1ae52e82e3fc8b08b7473c5" exitCode=0
Feb 02 13:25:26 crc kubenswrapper[4721]: I0202 13:25:26.310766 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zbntf" event={"ID":"84e297f9-7808-4195-86b2-2c17f4638bf2","Type":"ContainerDied","Data":"9f52f888eba9daf5e6e283524ff7481c4a05eeb6d1ae52e82e3fc8b08b7473c5"}
Feb 02 13:25:26 crc kubenswrapper[4721]: I0202 13:25:26.328286 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-6b23-account-create-update-5q82h" podStartSLOduration=5.328262514 podStartE2EDuration="5.328262514s" podCreationTimestamp="2026-02-02 13:25:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:25:26.289184874 +0000 UTC m=+1466.591699263" watchObservedRunningTime="2026-02-02 13:25:26.328262514 +0000 UTC m=+1466.630776903"
Feb 02 13:25:26 crc kubenswrapper[4721]: I0202 13:25:26.360815 4721 generic.go:334] "Generic (PLEG): container finished" podID="147719a3-96ca-4551-a395-648dd45b4ce6" containerID="663f048cc96da678155c1a95cb9691e5043c9f03a4259611e0be7f358517c2f1" exitCode=0
Feb 02 13:25:26 crc kubenswrapper[4721]: I0202 13:25:26.361216 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7hfxs" event={"ID":"147719a3-96ca-4551-a395-648dd45b4ce6","Type":"ContainerDied","Data":"663f048cc96da678155c1a95cb9691e5043c9f03a4259611e0be7f358517c2f1"}
Feb 02 13:25:26 crc kubenswrapper[4721]: I0202 13:25:26.455341 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8" path="/var/lib/kubelet/pods/2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8/volumes"
Feb 02 13:25:26 crc kubenswrapper[4721]: I0202 13:25:26.455972 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f7cb67-4d7c-4bc8-bf45-c949450206f0" path="/var/lib/kubelet/pods/c5f7cb67-4d7c-4bc8-bf45-c949450206f0/volumes"
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.098724 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-22bg7"
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.152405 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="0eabfa0b-0304-4eda-8f8a-dc9160569e4b" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.228:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.169214 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.238900 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz92z\" (UniqueName: \"kubernetes.io/projected/31d33cb7-8d98-44cc-97ef-229d34805e46-kube-api-access-cz92z\") pod \"31d33cb7-8d98-44cc-97ef-229d34805e46\" (UID: \"31d33cb7-8d98-44cc-97ef-229d34805e46\") "
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.239373 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31d33cb7-8d98-44cc-97ef-229d34805e46-operator-scripts\") pod \"31d33cb7-8d98-44cc-97ef-229d34805e46\" (UID: \"31d33cb7-8d98-44cc-97ef-229d34805e46\") "
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.244237 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d33cb7-8d98-44cc-97ef-229d34805e46-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "31d33cb7-8d98-44cc-97ef-229d34805e46" (UID: "31d33cb7-8d98-44cc-97ef-229d34805e46"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.256216 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d33cb7-8d98-44cc-97ef-229d34805e46-kube-api-access-cz92z" (OuterVolumeSpecName: "kube-api-access-cz92z") pod "31d33cb7-8d98-44cc-97ef-229d34805e46" (UID: "31d33cb7-8d98-44cc-97ef-229d34805e46"). InnerVolumeSpecName "kube-api-access-cz92z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.346121 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31d33cb7-8d98-44cc-97ef-229d34805e46-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.346591 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz92z\" (UniqueName: \"kubernetes.io/projected/31d33cb7-8d98-44cc-97ef-229d34805e46-kube-api-access-cz92z\") on node \"crc\" DevicePath \"\""
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.407994 4721 generic.go:334] "Generic (PLEG): container finished" podID="0dd874ae-fdb8-4f98-ae51-dac54a44e001" containerID="a5de70229867c13b14cef49c003c51ee5606bf312afc995295088db8627fee73" exitCode=0
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.408153 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5c37-account-create-update-h9w2m" event={"ID":"0dd874ae-fdb8-4f98-ae51-dac54a44e001","Type":"ContainerDied","Data":"a5de70229867c13b14cef49c003c51ee5606bf312afc995295088db8627fee73"}
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.438338 4721 generic.go:334] "Generic (PLEG): container finished" podID="679713b8-7e9b-4ccc-87f3-85afd17dc008" containerID="22439dbfbc2d5b9a7aaa38b029bb7c102abafe93d1e9397f189c54e9d9a93d8a" exitCode=0
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.438496 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"679713b8-7e9b-4ccc-87f3-85afd17dc008","Type":"ContainerDied","Data":"22439dbfbc2d5b9a7aaa38b029bb7c102abafe93d1e9397f189c54e9d9a93d8a"}
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.456421 4721 generic.go:334] "Generic (PLEG): container finished" podID="f1487506-5263-4ffe-b3e0-1a7a507590f9" containerID="4e9ec40449c9a13570d177a0fa260a694310bc1d1b70021291444805d29dc83d" exitCode=0
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.456507 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjlf4" event={"ID":"f1487506-5263-4ffe-b3e0-1a7a507590f9","Type":"ContainerDied","Data":"4e9ec40449c9a13570d177a0fa260a694310bc1d1b70021291444805d29dc83d"}
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.464615 4721 generic.go:334] "Generic (PLEG): container finished" podID="011e7b6f-64eb-48b5-be89-8304581d4c5f" containerID="a4df6290a6ff822c9798aad4bb78bddad86f6ee3871a5520d115e6e491f3950e" exitCode=0
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.464695 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6b23-account-create-update-5q82h" event={"ID":"011e7b6f-64eb-48b5-be89-8304581d4c5f","Type":"ContainerDied","Data":"a4df6290a6ff822c9798aad4bb78bddad86f6ee3871a5520d115e6e491f3950e"}
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.467810 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5f5129b1-fc26-40ba-9cf7-0f86e93507cd","Type":"ContainerStarted","Data":"df04a82f1d41907c901afd253e9b94f6426419fca2571acca9d0b478ca4084dd"}
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.502408 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-22bg7" event={"ID":"31d33cb7-8d98-44cc-97ef-229d34805e46","Type":"ContainerDied","Data":"17351fe868bc448175365bc355dc57fe9dc3ca03a3c9d151703111ff468d604c"}
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.502449 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17351fe868bc448175365bc355dc57fe9dc3ca03a3c9d151703111ff468d604c"
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.502533 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-22bg7"
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.508244 4721 generic.go:334] "Generic (PLEG): container finished" podID="a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1" containerID="d2ceca0ae565540870f02401237596a428eda4dcf72c0158ff8c7116acbfc486" exitCode=0
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.508514 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ecf5-account-create-update-k6kdv" event={"ID":"a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1","Type":"ContainerDied","Data":"d2ceca0ae565540870f02401237596a428eda4dcf72c0158ff8c7116acbfc486"}
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.525703 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.52567418 podStartE2EDuration="7.52567418s" podCreationTimestamp="2026-02-02 13:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:25:27.520584262 +0000 UTC m=+1467.823098651" watchObservedRunningTime="2026-02-02 13:25:27.52567418 +0000 UTC m=+1467.828188569"
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.813335 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.990297 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679713b8-7e9b-4ccc-87f3-85afd17dc008-config-data\") pod \"679713b8-7e9b-4ccc-87f3-85afd17dc008\" (UID: \"679713b8-7e9b-4ccc-87f3-85afd17dc008\") "
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.990401 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/679713b8-7e9b-4ccc-87f3-85afd17dc008-run-httpd\") pod \"679713b8-7e9b-4ccc-87f3-85afd17dc008\" (UID: \"679713b8-7e9b-4ccc-87f3-85afd17dc008\") "
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.990421 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/679713b8-7e9b-4ccc-87f3-85afd17dc008-scripts\") pod \"679713b8-7e9b-4ccc-87f3-85afd17dc008\" (UID: \"679713b8-7e9b-4ccc-87f3-85afd17dc008\") "
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.990524 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679713b8-7e9b-4ccc-87f3-85afd17dc008-combined-ca-bundle\") pod \"679713b8-7e9b-4ccc-87f3-85afd17dc008\" (UID: \"679713b8-7e9b-4ccc-87f3-85afd17dc008\") "
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.990582 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7n46\" (UniqueName: \"kubernetes.io/projected/679713b8-7e9b-4ccc-87f3-85afd17dc008-kube-api-access-r7n46\") pod \"679713b8-7e9b-4ccc-87f3-85afd17dc008\" (UID: \"679713b8-7e9b-4ccc-87f3-85afd17dc008\") "
Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.990796 4721 reconciler_common.go:159]
"operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/679713b8-7e9b-4ccc-87f3-85afd17dc008-log-httpd\") pod \"679713b8-7e9b-4ccc-87f3-85afd17dc008\" (UID: \"679713b8-7e9b-4ccc-87f3-85afd17dc008\") " Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.990877 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/679713b8-7e9b-4ccc-87f3-85afd17dc008-sg-core-conf-yaml\") pod \"679713b8-7e9b-4ccc-87f3-85afd17dc008\" (UID: \"679713b8-7e9b-4ccc-87f3-85afd17dc008\") " Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.991192 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/679713b8-7e9b-4ccc-87f3-85afd17dc008-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "679713b8-7e9b-4ccc-87f3-85afd17dc008" (UID: "679713b8-7e9b-4ccc-87f3-85afd17dc008"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.991593 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/679713b8-7e9b-4ccc-87f3-85afd17dc008-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "679713b8-7e9b-4ccc-87f3-85afd17dc008" (UID: "679713b8-7e9b-4ccc-87f3-85afd17dc008"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:25:27 crc kubenswrapper[4721]: I0202 13:25:27.991715 4721 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/679713b8-7e9b-4ccc-87f3-85afd17dc008-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.032559 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/679713b8-7e9b-4ccc-87f3-85afd17dc008-scripts" (OuterVolumeSpecName: "scripts") pod "679713b8-7e9b-4ccc-87f3-85afd17dc008" (UID: "679713b8-7e9b-4ccc-87f3-85afd17dc008"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.032732 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/679713b8-7e9b-4ccc-87f3-85afd17dc008-kube-api-access-r7n46" (OuterVolumeSpecName: "kube-api-access-r7n46") pod "679713b8-7e9b-4ccc-87f3-85afd17dc008" (UID: "679713b8-7e9b-4ccc-87f3-85afd17dc008"). InnerVolumeSpecName "kube-api-access-r7n46". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.095698 4721 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/679713b8-7e9b-4ccc-87f3-85afd17dc008-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.095735 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/679713b8-7e9b-4ccc-87f3-85afd17dc008-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.095746 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7n46\" (UniqueName: \"kubernetes.io/projected/679713b8-7e9b-4ccc-87f3-85afd17dc008-kube-api-access-r7n46\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.118319 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/679713b8-7e9b-4ccc-87f3-85afd17dc008-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "679713b8-7e9b-4ccc-87f3-85afd17dc008" (UID: "679713b8-7e9b-4ccc-87f3-85afd17dc008"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.239480 4721 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/679713b8-7e9b-4ccc-87f3-85afd17dc008-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.264819 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-zbntf" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.357935 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/679713b8-7e9b-4ccc-87f3-85afd17dc008-config-data" (OuterVolumeSpecName: "config-data") pod "679713b8-7e9b-4ccc-87f3-85afd17dc008" (UID: "679713b8-7e9b-4ccc-87f3-85afd17dc008"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.376303 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/679713b8-7e9b-4ccc-87f3-85afd17dc008-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "679713b8-7e9b-4ccc-87f3-85afd17dc008" (UID: "679713b8-7e9b-4ccc-87f3-85afd17dc008"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.455608 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmnpw\" (UniqueName: \"kubernetes.io/projected/84e297f9-7808-4195-86b2-2c17f4638bf2-kube-api-access-bmnpw\") pod \"84e297f9-7808-4195-86b2-2c17f4638bf2\" (UID: \"84e297f9-7808-4195-86b2-2c17f4638bf2\") " Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.455769 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84e297f9-7808-4195-86b2-2c17f4638bf2-operator-scripts\") pod \"84e297f9-7808-4195-86b2-2c17f4638bf2\" (UID: \"84e297f9-7808-4195-86b2-2c17f4638bf2\") " Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.457608 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679713b8-7e9b-4ccc-87f3-85afd17dc008-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.457650 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679713b8-7e9b-4ccc-87f3-85afd17dc008-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.458916 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e297f9-7808-4195-86b2-2c17f4638bf2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "84e297f9-7808-4195-86b2-2c17f4638bf2" (UID: "84e297f9-7808-4195-86b2-2c17f4638bf2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.461543 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84e297f9-7808-4195-86b2-2c17f4638bf2-kube-api-access-bmnpw" (OuterVolumeSpecName: "kube-api-access-bmnpw") pod "84e297f9-7808-4195-86b2-2c17f4638bf2" (UID: "84e297f9-7808-4195-86b2-2c17f4638bf2"). InnerVolumeSpecName "kube-api-access-bmnpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.468560 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7hfxs" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.533293 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7hfxs" event={"ID":"147719a3-96ca-4551-a395-648dd45b4ce6","Type":"ContainerDied","Data":"a55ae8457b5a788504e1e84ee76eca10d4fc6aebec31718324d6620e0d97f659"} Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.533355 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a55ae8457b5a788504e1e84ee76eca10d4fc6aebec31718324d6620e0d97f659" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.533422 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7hfxs" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.540764 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"679713b8-7e9b-4ccc-87f3-85afd17dc008","Type":"ContainerDied","Data":"6f08f3fdbee7bda0664d6d60e3f3f3c62d78c95caee038e2d39526bac0095990"} Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.540816 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.540989 4721 scope.go:117] "RemoveContainer" containerID="ab6eff23a607854558fcb2aa7cbae46f97306ff156928f4f08c74867721f4d2c" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.546612 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-zbntf" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.546670 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zbntf" event={"ID":"84e297f9-7808-4195-86b2-2c17f4638bf2","Type":"ContainerDied","Data":"5f9fe02523a0af94db9eb026cf39983eed77325df1c327f0bf401d20a6a4ed62"} Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.546708 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f9fe02523a0af94db9eb026cf39983eed77325df1c327f0bf401d20a6a4ed62" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.559543 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147719a3-96ca-4551-a395-648dd45b4ce6-operator-scripts\") pod \"147719a3-96ca-4551-a395-648dd45b4ce6\" (UID: \"147719a3-96ca-4551-a395-648dd45b4ce6\") " Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.559626 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flbcl\" (UniqueName: \"kubernetes.io/projected/147719a3-96ca-4551-a395-648dd45b4ce6-kube-api-access-flbcl\") pod \"147719a3-96ca-4551-a395-648dd45b4ce6\" (UID: \"147719a3-96ca-4551-a395-648dd45b4ce6\") " Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.561742 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/147719a3-96ca-4551-a395-648dd45b4ce6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "147719a3-96ca-4551-a395-648dd45b4ce6" (UID: "147719a3-96ca-4551-a395-648dd45b4ce6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.581312 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/147719a3-96ca-4551-a395-648dd45b4ce6-kube-api-access-flbcl" (OuterVolumeSpecName: "kube-api-access-flbcl") pod "147719a3-96ca-4551-a395-648dd45b4ce6" (UID: "147719a3-96ca-4551-a395-648dd45b4ce6"). InnerVolumeSpecName "kube-api-access-flbcl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.582249 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147719a3-96ca-4551-a395-648dd45b4ce6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.582275 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flbcl\" (UniqueName: \"kubernetes.io/projected/147719a3-96ca-4551-a395-648dd45b4ce6-kube-api-access-flbcl\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.582290 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmnpw\" (UniqueName: \"kubernetes.io/projected/84e297f9-7808-4195-86b2-2c17f4638bf2-kube-api-access-bmnpw\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.582300 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84e297f9-7808-4195-86b2-2c17f4638bf2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.642329 4721 scope.go:117] "RemoveContainer" containerID="3ad385a2be5227562f34db7d77851850b136530db655dfc87e87234604886068" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.643415 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.680188 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.727018 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:25:28 crc kubenswrapper[4721]: E0202 13:25:28.727597 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31d33cb7-8d98-44cc-97ef-229d34805e46" containerName="mariadb-database-create" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.727615 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="31d33cb7-8d98-44cc-97ef-229d34805e46" containerName="mariadb-database-create" Feb 02 13:25:28 crc kubenswrapper[4721]: E0202 13:25:28.727640 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="679713b8-7e9b-4ccc-87f3-85afd17dc008" containerName="ceilometer-notification-agent" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.727648 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="679713b8-7e9b-4ccc-87f3-85afd17dc008" containerName="ceilometer-notification-agent" Feb 02 13:25:28 crc kubenswrapper[4721]: E0202 13:25:28.727669 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5f7cb67-4d7c-4bc8-bf45-c949450206f0" containerName="heat-cfnapi" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.727677 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5f7cb67-4d7c-4bc8-bf45-c949450206f0" containerName="heat-cfnapi" Feb 02 13:25:28 crc kubenswrapper[4721]: E0202 13:25:28.727698 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8" containerName="heat-api" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.727705 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8" containerName="heat-api" Feb 02 13:25:28 crc kubenswrapper[4721]: E0202 13:25:28.727722 4721 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8" containerName="heat-api" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.727729 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8" containerName="heat-api" Feb 02 13:25:28 crc kubenswrapper[4721]: E0202 13:25:28.727745 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="679713b8-7e9b-4ccc-87f3-85afd17dc008" containerName="proxy-httpd" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.727751 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="679713b8-7e9b-4ccc-87f3-85afd17dc008" containerName="proxy-httpd" Feb 02 13:25:28 crc kubenswrapper[4721]: E0202 13:25:28.727764 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="679713b8-7e9b-4ccc-87f3-85afd17dc008" containerName="ceilometer-central-agent" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.727769 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="679713b8-7e9b-4ccc-87f3-85afd17dc008" containerName="ceilometer-central-agent" Feb 02 13:25:28 crc kubenswrapper[4721]: E0202 13:25:28.727784 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e297f9-7808-4195-86b2-2c17f4638bf2" containerName="mariadb-database-create" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.727790 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e297f9-7808-4195-86b2-2c17f4638bf2" containerName="mariadb-database-create" Feb 02 13:25:28 crc kubenswrapper[4721]: E0202 13:25:28.727798 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147719a3-96ca-4551-a395-648dd45b4ce6" containerName="mariadb-database-create" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.727804 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="147719a3-96ca-4551-a395-648dd45b4ce6" containerName="mariadb-database-create" Feb 02 13:25:28 crc kubenswrapper[4721]: E0202 13:25:28.727815 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="679713b8-7e9b-4ccc-87f3-85afd17dc008" containerName="sg-core" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.727821 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="679713b8-7e9b-4ccc-87f3-85afd17dc008" containerName="sg-core" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.728102 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8" containerName="heat-api" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.728117 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="679713b8-7e9b-4ccc-87f3-85afd17dc008" containerName="proxy-httpd" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.728129 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dc9b06c-1d5d-4e57-b1ab-13bdd82098b8" containerName="heat-api" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.728139 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="679713b8-7e9b-4ccc-87f3-85afd17dc008" containerName="ceilometer-notification-agent" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.728154 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="31d33cb7-8d98-44cc-97ef-229d34805e46" containerName="mariadb-database-create" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.728163 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5f7cb67-4d7c-4bc8-bf45-c949450206f0" containerName="heat-cfnapi" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.728177 4721 
memory_manager.go:354] "RemoveStaleState removing state" podUID="679713b8-7e9b-4ccc-87f3-85afd17dc008" containerName="ceilometer-central-agent" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.728186 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="147719a3-96ca-4551-a395-648dd45b4ce6" containerName="mariadb-database-create" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.728198 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5f7cb67-4d7c-4bc8-bf45-c949450206f0" containerName="heat-cfnapi" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.728206 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="679713b8-7e9b-4ccc-87f3-85afd17dc008" containerName="sg-core" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.728220 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="84e297f9-7808-4195-86b2-2c17f4638bf2" containerName="mariadb-database-create" Feb 02 13:25:28 crc kubenswrapper[4721]: E0202 13:25:28.728445 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5f7cb67-4d7c-4bc8-bf45-c949450206f0" containerName="heat-cfnapi" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.728453 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5f7cb67-4d7c-4bc8-bf45-c949450206f0" containerName="heat-cfnapi" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.743666 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.744420 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.749491 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.749606 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.757297 4721 scope.go:117] "RemoveContainer" containerID="f207fef2f76230ae29ed92563bfe6192bf4e5751fb81be7d8bc1400d4d42e087" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.889500 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-config-data\") pod \"ceilometer-0\" (UID: \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\") " pod="openstack/ceilometer-0" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.890034 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\") " pod="openstack/ceilometer-0" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.890447 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-scripts\") pod \"ceilometer-0\" (UID: \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\") " pod="openstack/ceilometer-0" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.890504 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\") " pod="openstack/ceilometer-0" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.890527 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-run-httpd\") pod \"ceilometer-0\" (UID: \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\") " pod="openstack/ceilometer-0" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.890586 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-log-httpd\") pod \"ceilometer-0\" (UID: \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\") " pod="openstack/ceilometer-0" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.890608 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85frw\" (UniqueName: \"kubernetes.io/projected/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-kube-api-access-85frw\") pod \"ceilometer-0\" (UID: \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\") " pod="openstack/ceilometer-0" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.996780 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\") " pod="openstack/ceilometer-0" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.996979 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-scripts\") pod \"ceilometer-0\" (UID: \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\") " pod="openstack/ceilometer-0" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.997000 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\") " pod="openstack/ceilometer-0" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.997023 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-run-httpd\") pod \"ceilometer-0\" (UID: \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\") " pod="openstack/ceilometer-0" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.997087 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-log-httpd\") pod \"ceilometer-0\" (UID: \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\") " pod="openstack/ceilometer-0" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.997107 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85frw\" (UniqueName: \"kubernetes.io/projected/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-kube-api-access-85frw\") pod \"ceilometer-0\" (UID: \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\") " pod="openstack/ceilometer-0" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.997217 4721 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-config-data\") pod \"ceilometer-0\" (UID: \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\") " pod="openstack/ceilometer-0" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.997715 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-log-httpd\") pod \"ceilometer-0\" (UID: \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\") " pod="openstack/ceilometer-0" Feb 02 13:25:28 crc kubenswrapper[4721]: I0202 13:25:28.997740 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-run-httpd\") pod \"ceilometer-0\" (UID: \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\") " pod="openstack/ceilometer-0" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.004282 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-config-data\") pod \"ceilometer-0\" (UID: \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\") " pod="openstack/ceilometer-0" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.006801 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-scripts\") pod \"ceilometer-0\" (UID: \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\") " pod="openstack/ceilometer-0" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.014519 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\") " pod="openstack/ceilometer-0" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.031920 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85frw\" (UniqueName: \"kubernetes.io/projected/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-kube-api-access-85frw\") pod \"ceilometer-0\" (UID: \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\") " pod="openstack/ceilometer-0" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.035829 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\") " pod="openstack/ceilometer-0" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.069403 4721 scope.go:117] "RemoveContainer" containerID="22439dbfbc2d5b9a7aaa38b029bb7c102abafe93d1e9397f189c54e9d9a93d8a" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.090653 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.129344 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5c37-account-create-update-h9w2m" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.172395 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.172446 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.226137 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djb56\" (UniqueName: \"kubernetes.io/projected/0dd874ae-fdb8-4f98-ae51-dac54a44e001-kube-api-access-djb56\") pod \"0dd874ae-fdb8-4f98-ae51-dac54a44e001\" (UID: \"0dd874ae-fdb8-4f98-ae51-dac54a44e001\") " Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.226399 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dd874ae-fdb8-4f98-ae51-dac54a44e001-operator-scripts\") pod \"0dd874ae-fdb8-4f98-ae51-dac54a44e001\" (UID: \"0dd874ae-fdb8-4f98-ae51-dac54a44e001\") " Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.228037 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dd874ae-fdb8-4f98-ae51-dac54a44e001-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0dd874ae-fdb8-4f98-ae51-dac54a44e001" (UID: "0dd874ae-fdb8-4f98-ae51-dac54a44e001"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.239697 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd874ae-fdb8-4f98-ae51-dac54a44e001-kube-api-access-djb56" (OuterVolumeSpecName: "kube-api-access-djb56") pod "0dd874ae-fdb8-4f98-ae51-dac54a44e001" (UID: "0dd874ae-fdb8-4f98-ae51-dac54a44e001"). InnerVolumeSpecName "kube-api-access-djb56". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.325158 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.329892 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dd874ae-fdb8-4f98-ae51-dac54a44e001-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.329927 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djb56\" (UniqueName: \"kubernetes.io/projected/0dd874ae-fdb8-4f98-ae51-dac54a44e001-kube-api-access-djb56\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.338285 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.526942 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6b23-account-create-update-5q82h" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.539618 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ecf5-account-create-update-k6kdv" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.583165 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ecf5-account-create-update-k6kdv" event={"ID":"a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1","Type":"ContainerDied","Data":"e8901988f690d98fcea72012066d25ce8684039b6b674156f34336f208a3df47"} Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.583546 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8901988f690d98fcea72012066d25ce8684039b6b674156f34336f208a3df47" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.583630 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ecf5-account-create-update-k6kdv" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.621939 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5c37-account-create-update-h9w2m" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.630655 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5c37-account-create-update-h9w2m" event={"ID":"0dd874ae-fdb8-4f98-ae51-dac54a44e001","Type":"ContainerDied","Data":"878a87f71ad3d4e4d73726cacc66c5efd64f6c529a185c729f0711dd8a268409"} Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.630716 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="878a87f71ad3d4e4d73726cacc66c5efd64f6c529a185c729f0711dd8a268409" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.639637 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1-operator-scripts\") pod \"a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1\" (UID: \"a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1\") " Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.639859 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdp4w\" (UniqueName: \"kubernetes.io/projected/011e7b6f-64eb-48b5-be89-8304581d4c5f-kube-api-access-kdp4w\") pod \"011e7b6f-64eb-48b5-be89-8304581d4c5f\" (UID: \"011e7b6f-64eb-48b5-be89-8304581d4c5f\") " Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.639956 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k25zs\" (UniqueName: \"kubernetes.io/projected/a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1-kube-api-access-k25zs\") pod \"a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1\" (UID: \"a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1\") " Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.640003 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/011e7b6f-64eb-48b5-be89-8304581d4c5f-operator-scripts\") pod \"011e7b6f-64eb-48b5-be89-8304581d4c5f\" (UID: \"011e7b6f-64eb-48b5-be89-8304581d4c5f\") " Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.640903 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6b23-account-create-update-5q82h" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.641160 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6b23-account-create-update-5q82h" event={"ID":"011e7b6f-64eb-48b5-be89-8304581d4c5f","Type":"ContainerDied","Data":"5d76c238acf537be4b1d7f54733a1d3d5ad8fa6ffa042633ba6b38b8df9885e8"} Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.641194 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d76c238acf537be4b1d7f54733a1d3d5ad8fa6ffa042633ba6b38b8df9885e8" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.641215 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.641359 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.641475 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/011e7b6f-64eb-48b5-be89-8304581d4c5f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "011e7b6f-64eb-48b5-be89-8304581d4c5f" (UID: "011e7b6f-64eb-48b5-be89-8304581d4c5f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.643576 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1" (UID: "a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.647315 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/011e7b6f-64eb-48b5-be89-8304581d4c5f-kube-api-access-kdp4w" (OuterVolumeSpecName: "kube-api-access-kdp4w") pod "011e7b6f-64eb-48b5-be89-8304581d4c5f" (UID: "011e7b6f-64eb-48b5-be89-8304581d4c5f"). InnerVolumeSpecName "kube-api-access-kdp4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.653747 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1-kube-api-access-k25zs" (OuterVolumeSpecName: "kube-api-access-k25zs") pod "a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1" (UID: "a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1"). InnerVolumeSpecName "kube-api-access-k25zs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.749050 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.749411 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdp4w\" (UniqueName: \"kubernetes.io/projected/011e7b6f-64eb-48b5-be89-8304581d4c5f-kube-api-access-kdp4w\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.749428 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k25zs\" (UniqueName: \"kubernetes.io/projected/a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1-kube-api-access-k25zs\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:29 crc kubenswrapper[4721]: I0202 13:25:29.749439 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/011e7b6f-64eb-48b5-be89-8304581d4c5f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:30 crc kubenswrapper[4721]: I0202 13:25:30.257947 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:25:30 crc kubenswrapper[4721]: I0202 13:25:30.423626 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="679713b8-7e9b-4ccc-87f3-85afd17dc008" path="/var/lib/kubelet/pods/679713b8-7e9b-4ccc-87f3-85afd17dc008/volumes" Feb 02 13:25:30 crc kubenswrapper[4721]: I0202 13:25:30.661318 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjlf4" event={"ID":"f1487506-5263-4ffe-b3e0-1a7a507590f9","Type":"ContainerStarted","Data":"286f545d594e5abd14663ecadd58ca5f7b2093d194dddbbe4cdbc1b3011f703d"} Feb 02 13:25:30 crc kubenswrapper[4721]: I0202 13:25:30.666668 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3","Type":"ContainerStarted","Data":"17422626d1aef368e4a89ff57fcd151611e206eb400b70c99ece56e2a2b12dbd"} Feb 02 13:25:31 crc kubenswrapper[4721]: I0202 13:25:31.304915 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 13:25:31 crc kubenswrapper[4721]: I0202 13:25:31.304962 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 13:25:31 crc kubenswrapper[4721]: I0202 13:25:31.365602 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 13:25:31 crc kubenswrapper[4721]: I0202 13:25:31.375260 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 13:25:31 crc kubenswrapper[4721]: I0202 13:25:31.678825 4721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 13:25:31 crc kubenswrapper[4721]: I0202 13:25:31.679197 4721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 13:25:31 crc kubenswrapper[4721]: I0202 13:25:31.679383 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 13:25:31 crc kubenswrapper[4721]: I0202 13:25:31.679422 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-internal-api-0" Feb 02 13:25:31 crc kubenswrapper[4721]: I0202 13:25:31.882933 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9h27j"] Feb 02 13:25:31 crc kubenswrapper[4721]: E0202 13:25:31.883571 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1" containerName="mariadb-account-create-update" Feb 02 13:25:31 crc kubenswrapper[4721]: I0202 13:25:31.883596 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1" containerName="mariadb-account-create-update" Feb 02 13:25:31 crc kubenswrapper[4721]: E0202 13:25:31.883633 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dd874ae-fdb8-4f98-ae51-dac54a44e001" containerName="mariadb-account-create-update" Feb 02 13:25:31 crc kubenswrapper[4721]: I0202 13:25:31.883642 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dd874ae-fdb8-4f98-ae51-dac54a44e001" containerName="mariadb-account-create-update" Feb 02 13:25:31 crc kubenswrapper[4721]: E0202 13:25:31.883663 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="011e7b6f-64eb-48b5-be89-8304581d4c5f" containerName="mariadb-account-create-update" Feb 02 13:25:31 crc kubenswrapper[4721]: I0202 13:25:31.883674 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="011e7b6f-64eb-48b5-be89-8304581d4c5f" containerName="mariadb-account-create-update" Feb 02 13:25:31 crc kubenswrapper[4721]: I0202 13:25:31.883987 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dd874ae-fdb8-4f98-ae51-dac54a44e001" containerName="mariadb-account-create-update" Feb 02 13:25:31 crc kubenswrapper[4721]: I0202 13:25:31.884016 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="011e7b6f-64eb-48b5-be89-8304581d4c5f" containerName="mariadb-account-create-update" Feb 02 13:25:31 crc kubenswrapper[4721]: I0202 13:25:31.884030 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1" containerName="mariadb-account-create-update" Feb 02 13:25:31 crc kubenswrapper[4721]: I0202 13:25:31.884969 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9h27j" Feb 02 13:25:31 crc kubenswrapper[4721]: I0202 13:25:31.886833 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mq69d" Feb 02 13:25:31 crc kubenswrapper[4721]: I0202 13:25:31.887326 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 02 13:25:31 crc kubenswrapper[4721]: I0202 13:25:31.887628 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 02 13:25:31 crc kubenswrapper[4721]: I0202 13:25:31.902486 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9h27j"] Feb 02 13:25:32 crc kubenswrapper[4721]: I0202 13:25:32.024378 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f01b253a-c7c6-4c9e-a800-a1732ba06f37-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9h27j\" (UID: \"f01b253a-c7c6-4c9e-a800-a1732ba06f37\") " pod="openstack/nova-cell0-conductor-db-sync-9h27j" Feb 02 13:25:32 crc kubenswrapper[4721]: I0202 13:25:32.024721 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc8q5\" (UniqueName: \"kubernetes.io/projected/f01b253a-c7c6-4c9e-a800-a1732ba06f37-kube-api-access-zc8q5\") pod \"nova-cell0-conductor-db-sync-9h27j\" (UID: \"f01b253a-c7c6-4c9e-a800-a1732ba06f37\") " pod="openstack/nova-cell0-conductor-db-sync-9h27j" Feb 02 13:25:32 crc kubenswrapper[4721]: I0202 13:25:32.024848 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f01b253a-c7c6-4c9e-a800-a1732ba06f37-config-data\") pod \"nova-cell0-conductor-db-sync-9h27j\" (UID: \"f01b253a-c7c6-4c9e-a800-a1732ba06f37\") " pod="openstack/nova-cell0-conductor-db-sync-9h27j" Feb 02 13:25:32 crc kubenswrapper[4721]: I0202 13:25:32.025040 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f01b253a-c7c6-4c9e-a800-a1732ba06f37-scripts\") pod \"nova-cell0-conductor-db-sync-9h27j\" (UID: \"f01b253a-c7c6-4c9e-a800-a1732ba06f37\") " pod="openstack/nova-cell0-conductor-db-sync-9h27j" Feb 02 13:25:32 crc kubenswrapper[4721]: I0202 13:25:32.127310 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc8q5\" (UniqueName: \"kubernetes.io/projected/f01b253a-c7c6-4c9e-a800-a1732ba06f37-kube-api-access-zc8q5\") pod \"nova-cell0-conductor-db-sync-9h27j\" (UID: \"f01b253a-c7c6-4c9e-a800-a1732ba06f37\") " pod="openstack/nova-cell0-conductor-db-sync-9h27j" Feb 02 13:25:32 crc kubenswrapper[4721]: I0202 13:25:32.127646 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f01b253a-c7c6-4c9e-a800-a1732ba06f37-config-data\") pod \"nova-cell0-conductor-db-sync-9h27j\" (UID: \"f01b253a-c7c6-4c9e-a800-a1732ba06f37\") " pod="openstack/nova-cell0-conductor-db-sync-9h27j" Feb 02 13:25:32 crc kubenswrapper[4721]: I0202 13:25:32.127706 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f01b253a-c7c6-4c9e-a800-a1732ba06f37-scripts\") pod \"nova-cell0-conductor-db-sync-9h27j\" (UID: 
\"f01b253a-c7c6-4c9e-a800-a1732ba06f37\") " pod="openstack/nova-cell0-conductor-db-sync-9h27j" Feb 02 13:25:32 crc kubenswrapper[4721]: I0202 13:25:32.127770 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f01b253a-c7c6-4c9e-a800-a1732ba06f37-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9h27j\" (UID: \"f01b253a-c7c6-4c9e-a800-a1732ba06f37\") " pod="openstack/nova-cell0-conductor-db-sync-9h27j" Feb 02 13:25:32 crc kubenswrapper[4721]: I0202 13:25:32.135614 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f01b253a-c7c6-4c9e-a800-a1732ba06f37-scripts\") pod \"nova-cell0-conductor-db-sync-9h27j\" (UID: \"f01b253a-c7c6-4c9e-a800-a1732ba06f37\") " pod="openstack/nova-cell0-conductor-db-sync-9h27j" Feb 02 13:25:32 crc kubenswrapper[4721]: I0202 13:25:32.135911 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f01b253a-c7c6-4c9e-a800-a1732ba06f37-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9h27j\" (UID: \"f01b253a-c7c6-4c9e-a800-a1732ba06f37\") " pod="openstack/nova-cell0-conductor-db-sync-9h27j" Feb 02 13:25:32 crc kubenswrapper[4721]: I0202 13:25:32.136706 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f01b253a-c7c6-4c9e-a800-a1732ba06f37-config-data\") pod \"nova-cell0-conductor-db-sync-9h27j\" (UID: \"f01b253a-c7c6-4c9e-a800-a1732ba06f37\") " pod="openstack/nova-cell0-conductor-db-sync-9h27j" Feb 02 13:25:32 crc kubenswrapper[4721]: I0202 13:25:32.148499 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc8q5\" (UniqueName: \"kubernetes.io/projected/f01b253a-c7c6-4c9e-a800-a1732ba06f37-kube-api-access-zc8q5\") pod \"nova-cell0-conductor-db-sync-9h27j\" (UID: \"f01b253a-c7c6-4c9e-a800-a1732ba06f37\") " pod="openstack/nova-cell0-conductor-db-sync-9h27j" Feb 02 13:25:32 crc kubenswrapper[4721]: I0202 13:25:32.208081 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9h27j" Feb 02 13:25:32 crc kubenswrapper[4721]: I0202 13:25:32.711696 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3","Type":"ContainerStarted","Data":"250d61da9c9aec53fa8403200cdb7bf07e9fd015ad372194d53e43f3edd63adb"} Feb 02 13:25:32 crc kubenswrapper[4721]: W0202 13:25:32.749625 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf01b253a_c7c6_4c9e_a800_a1732ba06f37.slice/crio-5b3069f620076da85927b973fa828028181bff883e33dc0cd16ab04434fa94e6 WatchSource:0}: Error finding container 5b3069f620076da85927b973fa828028181bff883e33dc0cd16ab04434fa94e6: Status 404 returned error can't find the container with id 5b3069f620076da85927b973fa828028181bff883e33dc0cd16ab04434fa94e6 Feb 02 13:25:32 crc kubenswrapper[4721]: I0202 13:25:32.762030 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9h27j"] Feb 02 13:25:33 crc kubenswrapper[4721]: I0202 13:25:33.747496 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3","Type":"ContainerStarted","Data":"b0b5571eb117a42c85d555cc9f73ebf04bc2152bc212ed2d51da96cbfa4c0649"} Feb 02 13:25:33 crc kubenswrapper[4721]: I0202 13:25:33.755433 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9h27j" event={"ID":"f01b253a-c7c6-4c9e-a800-a1732ba06f37","Type":"ContainerStarted","Data":"5b3069f620076da85927b973fa828028181bff883e33dc0cd16ab04434fa94e6"} Feb 02 13:25:34 crc kubenswrapper[4721]: E0202 13:25:34.011864 4721 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="824ac82da4e9fbe898c73f52200c686a08effd8f70bdcb2e1cbad03807a59062" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 02 13:25:34 crc kubenswrapper[4721]: E0202 13:25:34.013292 4721 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="824ac82da4e9fbe898c73f52200c686a08effd8f70bdcb2e1cbad03807a59062" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 02 13:25:34 crc kubenswrapper[4721]: E0202 13:25:34.014641 4721 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="824ac82da4e9fbe898c73f52200c686a08effd8f70bdcb2e1cbad03807a59062" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 02 13:25:34 crc kubenswrapper[4721]: E0202 13:25:34.014674 4721 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-69599c8b5f-rjs76" podUID="ce072a84-75da-4060-9c4a-d029b3a14947" containerName="heat-engine" Feb 02 13:25:34 crc kubenswrapper[4721]: I0202 13:25:34.134785 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mlbxn" Feb 02 13:25:34 crc kubenswrapper[4721]: I0202 13:25:34.194892 4721 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mlbxn" Feb 02 13:25:34 crc kubenswrapper[4721]: I0202 13:25:34.935277 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mlbxn"] Feb 02 13:25:35 crc kubenswrapper[4721]: I0202 13:25:35.199635 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:25:35 crc kubenswrapper[4721]: I0202 13:25:35.798392 4721 generic.go:334] "Generic (PLEG): container finished" podID="f1487506-5263-4ffe-b3e0-1a7a507590f9" containerID="286f545d594e5abd14663ecadd58ca5f7b2093d194dddbbe4cdbc1b3011f703d" exitCode=0 Feb 02 13:25:35 crc kubenswrapper[4721]: I0202 13:25:35.798488 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjlf4" event={"ID":"f1487506-5263-4ffe-b3e0-1a7a507590f9","Type":"ContainerDied","Data":"286f545d594e5abd14663ecadd58ca5f7b2093d194dddbbe4cdbc1b3011f703d"} Feb 02 13:25:35 crc kubenswrapper[4721]: I0202 13:25:35.799097 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mlbxn" podUID="37372b76-ef54-4a44-9b56-dea754373219" containerName="registry-server" containerID="cri-o://c5c7de4e95c2706b26ef16b1a2612a77aaede7e751f9c5dde67c52ea9832ec9a" gracePeriod=2 Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.541389 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-69599c8b5f-rjs76" Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.625916 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mlbxn" Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.662694 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce072a84-75da-4060-9c4a-d029b3a14947-config-data\") pod \"ce072a84-75da-4060-9c4a-d029b3a14947\" (UID: \"ce072a84-75da-4060-9c4a-d029b3a14947\") " Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.664198 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxc9d\" (UniqueName: \"kubernetes.io/projected/ce072a84-75da-4060-9c4a-d029b3a14947-kube-api-access-wxc9d\") pod \"ce072a84-75da-4060-9c4a-d029b3a14947\" (UID: \"ce072a84-75da-4060-9c4a-d029b3a14947\") " Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.664484 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce072a84-75da-4060-9c4a-d029b3a14947-combined-ca-bundle\") pod \"ce072a84-75da-4060-9c4a-d029b3a14947\" (UID: \"ce072a84-75da-4060-9c4a-d029b3a14947\") " Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.664702 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37372b76-ef54-4a44-9b56-dea754373219-catalog-content\") pod \"37372b76-ef54-4a44-9b56-dea754373219\" (UID: \"37372b76-ef54-4a44-9b56-dea754373219\") " Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.664825 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce072a84-75da-4060-9c4a-d029b3a14947-config-data-custom\") pod \"ce072a84-75da-4060-9c4a-d029b3a14947\" (UID: \"ce072a84-75da-4060-9c4a-d029b3a14947\") " Feb 02 13:25:36 crc 
kubenswrapper[4721]: I0202 13:25:36.664998 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmwvt\" (UniqueName: \"kubernetes.io/projected/37372b76-ef54-4a44-9b56-dea754373219-kube-api-access-dmwvt\") pod \"37372b76-ef54-4a44-9b56-dea754373219\" (UID: \"37372b76-ef54-4a44-9b56-dea754373219\") " Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.665140 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37372b76-ef54-4a44-9b56-dea754373219-utilities\") pod \"37372b76-ef54-4a44-9b56-dea754373219\" (UID: \"37372b76-ef54-4a44-9b56-dea754373219\") " Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.666497 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37372b76-ef54-4a44-9b56-dea754373219-utilities" (OuterVolumeSpecName: "utilities") pod "37372b76-ef54-4a44-9b56-dea754373219" (UID: "37372b76-ef54-4a44-9b56-dea754373219"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.672951 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce072a84-75da-4060-9c4a-d029b3a14947-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ce072a84-75da-4060-9c4a-d029b3a14947" (UID: "ce072a84-75da-4060-9c4a-d029b3a14947"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.677256 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37372b76-ef54-4a44-9b56-dea754373219-kube-api-access-dmwvt" (OuterVolumeSpecName: "kube-api-access-dmwvt") pod "37372b76-ef54-4a44-9b56-dea754373219" (UID: "37372b76-ef54-4a44-9b56-dea754373219"). InnerVolumeSpecName "kube-api-access-dmwvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.681143 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce072a84-75da-4060-9c4a-d029b3a14947-kube-api-access-wxc9d" (OuterVolumeSpecName: "kube-api-access-wxc9d") pod "ce072a84-75da-4060-9c4a-d029b3a14947" (UID: "ce072a84-75da-4060-9c4a-d029b3a14947"). InnerVolumeSpecName "kube-api-access-wxc9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.769727 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce072a84-75da-4060-9c4a-d029b3a14947-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce072a84-75da-4060-9c4a-d029b3a14947" (UID: "ce072a84-75da-4060-9c4a-d029b3a14947"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.774343 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxc9d\" (UniqueName: \"kubernetes.io/projected/ce072a84-75da-4060-9c4a-d029b3a14947-kube-api-access-wxc9d\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.774404 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce072a84-75da-4060-9c4a-d029b3a14947-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.774423 4721 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce072a84-75da-4060-9c4a-d029b3a14947-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.774440 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmwvt\" (UniqueName: \"kubernetes.io/projected/37372b76-ef54-4a44-9b56-dea754373219-kube-api-access-dmwvt\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.774457 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37372b76-ef54-4a44-9b56-dea754373219-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.808719 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce072a84-75da-4060-9c4a-d029b3a14947-config-data" (OuterVolumeSpecName: "config-data") pod "ce072a84-75da-4060-9c4a-d029b3a14947" (UID: "ce072a84-75da-4060-9c4a-d029b3a14947"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.825553 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37372b76-ef54-4a44-9b56-dea754373219-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37372b76-ef54-4a44-9b56-dea754373219" (UID: "37372b76-ef54-4a44-9b56-dea754373219"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.830515 4721 generic.go:334] "Generic (PLEG): container finished" podID="ce072a84-75da-4060-9c4a-d029b3a14947" containerID="824ac82da4e9fbe898c73f52200c686a08effd8f70bdcb2e1cbad03807a59062" exitCode=0 Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.830722 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-69599c8b5f-rjs76" Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.831112 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-69599c8b5f-rjs76" event={"ID":"ce072a84-75da-4060-9c4a-d029b3a14947","Type":"ContainerDied","Data":"824ac82da4e9fbe898c73f52200c686a08effd8f70bdcb2e1cbad03807a59062"} Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.831163 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-69599c8b5f-rjs76" event={"ID":"ce072a84-75da-4060-9c4a-d029b3a14947","Type":"ContainerDied","Data":"c5aaadf4a39fbac2caf4ce2bae03ec472daa951b771c303d721f863e7147d5f2"} Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.831181 4721 scope.go:117] "RemoveContainer" containerID="824ac82da4e9fbe898c73f52200c686a08effd8f70bdcb2e1cbad03807a59062" Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.871486 4721 generic.go:334] "Generic (PLEG): container finished" podID="37372b76-ef54-4a44-9b56-dea754373219" containerID="c5c7de4e95c2706b26ef16b1a2612a77aaede7e751f9c5dde67c52ea9832ec9a" exitCode=0 Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.872119 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mlbxn" event={"ID":"37372b76-ef54-4a44-9b56-dea754373219","Type":"ContainerDied","Data":"c5c7de4e95c2706b26ef16b1a2612a77aaede7e751f9c5dde67c52ea9832ec9a"} Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.872246 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mlbxn" event={"ID":"37372b76-ef54-4a44-9b56-dea754373219","Type":"ContainerDied","Data":"527e8468434eea06358e3d4622c114662919b1ba98ef618fb71f16dfc7759e5a"} Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.872438 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mlbxn" Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.876829 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce072a84-75da-4060-9c4a-d029b3a14947-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.876865 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37372b76-ef54-4a44-9b56-dea754373219-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.923365 4721 scope.go:117] "RemoveContainer" containerID="824ac82da4e9fbe898c73f52200c686a08effd8f70bdcb2e1cbad03807a59062" Feb 02 13:25:36 crc kubenswrapper[4721]: E0202 13:25:36.934466 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"824ac82da4e9fbe898c73f52200c686a08effd8f70bdcb2e1cbad03807a59062\": container with ID starting with 824ac82da4e9fbe898c73f52200c686a08effd8f70bdcb2e1cbad03807a59062 not found: ID does not exist" containerID="824ac82da4e9fbe898c73f52200c686a08effd8f70bdcb2e1cbad03807a59062" Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.934526 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"824ac82da4e9fbe898c73f52200c686a08effd8f70bdcb2e1cbad03807a59062"} err="failed to get container status \"824ac82da4e9fbe898c73f52200c686a08effd8f70bdcb2e1cbad03807a59062\": rpc error: code = NotFound desc = could not find container \"824ac82da4e9fbe898c73f52200c686a08effd8f70bdcb2e1cbad03807a59062\": container with ID starting with 824ac82da4e9fbe898c73f52200c686a08effd8f70bdcb2e1cbad03807a59062 not found: ID does not exist" Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.934558 4721 scope.go:117] "RemoveContainer" containerID="c5c7de4e95c2706b26ef16b1a2612a77aaede7e751f9c5dde67c52ea9832ec9a" Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.936402 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-69599c8b5f-rjs76"] Feb 02 13:25:36 crc kubenswrapper[4721]: I0202 13:25:36.949794 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-69599c8b5f-rjs76"] Feb 02 13:25:37 crc kubenswrapper[4721]: I0202 13:25:37.018895 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mlbxn"] Feb 02 13:25:37 crc kubenswrapper[4721]: I0202 13:25:37.027278 4721 scope.go:117] "RemoveContainer" containerID="e7fd605be804cd647b2eac1ab49ff2a82cd119fbc55b1ed46d82b6f010b0c40a" Feb 02 13:25:37 crc kubenswrapper[4721]: I0202 13:25:37.029625 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mlbxn"] Feb 02 13:25:37 crc kubenswrapper[4721]: I0202 13:25:37.083270 4721 scope.go:117] "RemoveContainer" containerID="be431440dec21df753f3a84f641990a0bc2d2448dae27123ad4db374fc8f8981" Feb 02 13:25:37 crc kubenswrapper[4721]: I0202 13:25:37.160492 4721 scope.go:117] "RemoveContainer" containerID="c5c7de4e95c2706b26ef16b1a2612a77aaede7e751f9c5dde67c52ea9832ec9a" Feb 02 13:25:37 crc kubenswrapper[4721]: E0202 13:25:37.162680 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5c7de4e95c2706b26ef16b1a2612a77aaede7e751f9c5dde67c52ea9832ec9a\": container with ID starting with 
c5c7de4e95c2706b26ef16b1a2612a77aaede7e751f9c5dde67c52ea9832ec9a not found: ID does not exist" containerID="c5c7de4e95c2706b26ef16b1a2612a77aaede7e751f9c5dde67c52ea9832ec9a" Feb 02 13:25:37 crc kubenswrapper[4721]: I0202 13:25:37.162716 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5c7de4e95c2706b26ef16b1a2612a77aaede7e751f9c5dde67c52ea9832ec9a"} err="failed to get container status \"c5c7de4e95c2706b26ef16b1a2612a77aaede7e751f9c5dde67c52ea9832ec9a\": rpc error: code = NotFound desc = could not find container \"c5c7de4e95c2706b26ef16b1a2612a77aaede7e751f9c5dde67c52ea9832ec9a\": container with ID starting with c5c7de4e95c2706b26ef16b1a2612a77aaede7e751f9c5dde67c52ea9832ec9a not found: ID does not exist" Feb 02 13:25:37 crc kubenswrapper[4721]: I0202 13:25:37.162739 4721 scope.go:117] "RemoveContainer" containerID="e7fd605be804cd647b2eac1ab49ff2a82cd119fbc55b1ed46d82b6f010b0c40a" Feb 02 13:25:37 crc kubenswrapper[4721]: E0202 13:25:37.163370 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7fd605be804cd647b2eac1ab49ff2a82cd119fbc55b1ed46d82b6f010b0c40a\": container with ID starting with e7fd605be804cd647b2eac1ab49ff2a82cd119fbc55b1ed46d82b6f010b0c40a not found: ID does not exist" containerID="e7fd605be804cd647b2eac1ab49ff2a82cd119fbc55b1ed46d82b6f010b0c40a" Feb 02 13:25:37 crc kubenswrapper[4721]: I0202 13:25:37.163394 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7fd605be804cd647b2eac1ab49ff2a82cd119fbc55b1ed46d82b6f010b0c40a"} err="failed to get container status \"e7fd605be804cd647b2eac1ab49ff2a82cd119fbc55b1ed46d82b6f010b0c40a\": rpc error: code = NotFound desc = could not find container \"e7fd605be804cd647b2eac1ab49ff2a82cd119fbc55b1ed46d82b6f010b0c40a\": container with ID starting with e7fd605be804cd647b2eac1ab49ff2a82cd119fbc55b1ed46d82b6f010b0c40a not found: ID does not exist" Feb 02 13:25:37 crc kubenswrapper[4721]: I0202 13:25:37.163410 4721 scope.go:117] "RemoveContainer" containerID="be431440dec21df753f3a84f641990a0bc2d2448dae27123ad4db374fc8f8981" Feb 02 13:25:37 crc kubenswrapper[4721]: E0202 13:25:37.163981 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be431440dec21df753f3a84f641990a0bc2d2448dae27123ad4db374fc8f8981\": container with ID starting with be431440dec21df753f3a84f641990a0bc2d2448dae27123ad4db374fc8f8981 not found: ID does not exist" containerID="be431440dec21df753f3a84f641990a0bc2d2448dae27123ad4db374fc8f8981" Feb 02 13:25:37 crc kubenswrapper[4721]: I0202 13:25:37.164002 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be431440dec21df753f3a84f641990a0bc2d2448dae27123ad4db374fc8f8981"} err="failed to get container status \"be431440dec21df753f3a84f641990a0bc2d2448dae27123ad4db374fc8f8981\": rpc error: code = NotFound desc = could not find container \"be431440dec21df753f3a84f641990a0bc2d2448dae27123ad4db374fc8f8981\": container with ID starting with be431440dec21df753f3a84f641990a0bc2d2448dae27123ad4db374fc8f8981 not found: ID does not exist" Feb 02 13:25:37 crc kubenswrapper[4721]: I0202 13:25:37.888965 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjlf4" 
event={"ID":"f1487506-5263-4ffe-b3e0-1a7a507590f9","Type":"ContainerStarted","Data":"8abc226211c9966989623516d814dc227ce6e9431d716262c64d61c953cd9de5"} Feb 02 13:25:37 crc kubenswrapper[4721]: I0202 13:25:37.899430 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3","Type":"ContainerStarted","Data":"3b0302d825319ad93ac4b7a0c298d8bf5cb6775771000ecd18cf0462db227389"} Feb 02 13:25:37 crc kubenswrapper[4721]: I0202 13:25:37.924365 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bjlf4" podStartSLOduration=7.933875262 podStartE2EDuration="16.924339051s" podCreationTimestamp="2026-02-02 13:25:21 +0000 UTC" firstStartedPulling="2026-02-02 13:25:27.45931978 +0000 UTC m=+1467.761834169" lastFinishedPulling="2026-02-02 13:25:36.449783569 +0000 UTC m=+1476.752297958" observedRunningTime="2026-02-02 13:25:37.907469503 +0000 UTC m=+1478.209983912" watchObservedRunningTime="2026-02-02 13:25:37.924339051 +0000 UTC m=+1478.226853450" Feb 02 13:25:38 crc kubenswrapper[4721]: I0202 13:25:38.441458 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37372b76-ef54-4a44-9b56-dea754373219" path="/var/lib/kubelet/pods/37372b76-ef54-4a44-9b56-dea754373219/volumes" Feb 02 13:25:38 crc kubenswrapper[4721]: I0202 13:25:38.446442 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce072a84-75da-4060-9c4a-d029b3a14947" path="/var/lib/kubelet/pods/ce072a84-75da-4060-9c4a-d029b3a14947/volumes" Feb 02 13:25:38 crc kubenswrapper[4721]: I0202 13:25:38.650905 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 13:25:38 crc kubenswrapper[4721]: I0202 13:25:38.653089 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 13:25:38 crc kubenswrapper[4721]: I0202 13:25:38.653263 4721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 13:25:38 crc kubenswrapper[4721]: I0202 13:25:38.654824 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 13:25:38 crc kubenswrapper[4721]: I0202 13:25:38.654879 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 13:25:39 crc kubenswrapper[4721]: I0202 13:25:39.929922 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3","Type":"ContainerStarted","Data":"b559fdc4b6fca20276abf4dceba9979868b3f8f7cc76c9ba5d01cbaddb599222"} Feb 02 13:25:39 crc kubenswrapper[4721]: I0202 13:25:39.930173 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1170efb-7d4f-4c8b-84dd-6d86925bf5d3" containerName="ceilometer-central-agent" containerID="cri-o://250d61da9c9aec53fa8403200cdb7bf07e9fd015ad372194d53e43f3edd63adb" gracePeriod=30 Feb 02 13:25:39 crc kubenswrapper[4721]: I0202 13:25:39.930215 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1170efb-7d4f-4c8b-84dd-6d86925bf5d3" containerName="sg-core" containerID="cri-o://3b0302d825319ad93ac4b7a0c298d8bf5cb6775771000ecd18cf0462db227389" gracePeriod=30 Feb 02 13:25:39 crc kubenswrapper[4721]: I0202 13:25:39.930249 4721 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1170efb-7d4f-4c8b-84dd-6d86925bf5d3" containerName="proxy-httpd" containerID="cri-o://b559fdc4b6fca20276abf4dceba9979868b3f8f7cc76c9ba5d01cbaddb599222" gracePeriod=30 Feb 02 13:25:39 crc kubenswrapper[4721]: I0202 13:25:39.930286 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1170efb-7d4f-4c8b-84dd-6d86925bf5d3" containerName="ceilometer-notification-agent" containerID="cri-o://b0b5571eb117a42c85d555cc9f73ebf04bc2152bc212ed2d51da96cbfa4c0649" gracePeriod=30 Feb 02 13:25:39 crc kubenswrapper[4721]: I0202 13:25:39.930508 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 13:25:39 crc kubenswrapper[4721]: I0202 13:25:39.958916 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.120751072 podStartE2EDuration="11.95889688s" podCreationTimestamp="2026-02-02 13:25:28 +0000 UTC" firstStartedPulling="2026-02-02 13:25:30.256487452 +0000 UTC m=+1470.559001841" lastFinishedPulling="2026-02-02 13:25:39.09463326 +0000 UTC m=+1479.397147649" observedRunningTime="2026-02-02 13:25:39.953623777 +0000 UTC m=+1480.256138176" watchObservedRunningTime="2026-02-02 13:25:39.95889688 +0000 UTC m=+1480.261411269" Feb 02 13:25:40 crc kubenswrapper[4721]: I0202 13:25:40.948660 4721 generic.go:334] "Generic (PLEG): container finished" podID="d1170efb-7d4f-4c8b-84dd-6d86925bf5d3" containerID="3b0302d825319ad93ac4b7a0c298d8bf5cb6775771000ecd18cf0462db227389" exitCode=2 Feb 02 13:25:40 crc kubenswrapper[4721]: I0202 13:25:40.948930 4721 generic.go:334] "Generic (PLEG): container finished" podID="d1170efb-7d4f-4c8b-84dd-6d86925bf5d3" containerID="b0b5571eb117a42c85d555cc9f73ebf04bc2152bc212ed2d51da96cbfa4c0649" exitCode=0 Feb 02 13:25:40 crc kubenswrapper[4721]: I0202 13:25:40.948958 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3","Type":"ContainerDied","Data":"3b0302d825319ad93ac4b7a0c298d8bf5cb6775771000ecd18cf0462db227389"} Feb 02 13:25:40 crc kubenswrapper[4721]: I0202 13:25:40.948991 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3","Type":"ContainerDied","Data":"b0b5571eb117a42c85d555cc9f73ebf04bc2152bc212ed2d51da96cbfa4c0649"} Feb 02 13:25:42 crc kubenswrapper[4721]: I0202 13:25:42.125896 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bjlf4" Feb 02 13:25:42 crc kubenswrapper[4721]: I0202 13:25:42.126288 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bjlf4" Feb 02 13:25:43 crc kubenswrapper[4721]: I0202 13:25:43.212465 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-bjlf4" podUID="f1487506-5263-4ffe-b3e0-1a7a507590f9" containerName="registry-server" probeResult="failure" output=< Feb 02 13:25:43 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Feb 02 13:25:43 crc kubenswrapper[4721]: > Feb 02 13:25:44 crc kubenswrapper[4721]: I0202 13:25:44.763532 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:25:44 crc kubenswrapper[4721]: I0202 13:25:44.763936 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:25:44 crc kubenswrapper[4721]: I0202 13:25:44.763984 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" Feb 02 13:25:44 crc kubenswrapper[4721]: I0202 13:25:44.764863 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4c89d3977af7fbb3779c0661dadea0111ac2d8f3c3974c534b682ad6a4af4aac"} pod="openshift-machine-config-operator/machine-config-daemon-rppjz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 13:25:44 crc kubenswrapper[4721]: I0202 13:25:44.764907 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" containerID="cri-o://4c89d3977af7fbb3779c0661dadea0111ac2d8f3c3974c534b682ad6a4af4aac" gracePeriod=600 Feb 02 13:25:45 crc kubenswrapper[4721]: I0202 13:25:45.004844 4721 generic.go:334] "Generic (PLEG): container finished" podID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerID="4c89d3977af7fbb3779c0661dadea0111ac2d8f3c3974c534b682ad6a4af4aac" exitCode=0 Feb 02 13:25:45 crc kubenswrapper[4721]: I0202 13:25:45.004884 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerDied","Data":"4c89d3977af7fbb3779c0661dadea0111ac2d8f3c3974c534b682ad6a4af4aac"} Feb 02 13:25:45 crc kubenswrapper[4721]: I0202 13:25:45.004915 4721 scope.go:117] "RemoveContainer" containerID="56e02e958f304734b98b90c0b35547a7aaeb3ba27ad6cd35ef754f549abd2513" Feb 02 13:25:49 crc kubenswrapper[4721]: I0202 13:25:49.056015 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerStarted","Data":"28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c"} Feb 02 13:25:49 crc kubenswrapper[4721]: I0202 13:25:49.058467 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9h27j" event={"ID":"f01b253a-c7c6-4c9e-a800-a1732ba06f37","Type":"ContainerStarted","Data":"8b1af4c2f51b92e7d8e686eda8c46c518ebd4e4b694ff77617f39b7b376a484a"} Feb 02 13:25:49 crc kubenswrapper[4721]: I0202 13:25:49.100315 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-9h27j" podStartSLOduration=2.8789096130000003 podStartE2EDuration="18.100292093s" podCreationTimestamp="2026-02-02 13:25:31 +0000 UTC" firstStartedPulling="2026-02-02 13:25:32.752278969 +0000 UTC m=+1473.054793368" lastFinishedPulling="2026-02-02 13:25:47.973661459 +0000 UTC m=+1488.276175848" observedRunningTime="2026-02-02 13:25:49.090641711 +0000 UTC m=+1489.393156100" watchObservedRunningTime="2026-02-02 
13:25:49.100292093 +0000 UTC m=+1489.402806502" Feb 02 13:25:49 crc kubenswrapper[4721]: I0202 13:25:49.881790 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6vl65"] Feb 02 13:25:49 crc kubenswrapper[4721]: E0202 13:25:49.883237 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37372b76-ef54-4a44-9b56-dea754373219" containerName="registry-server" Feb 02 13:25:49 crc kubenswrapper[4721]: I0202 13:25:49.883262 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="37372b76-ef54-4a44-9b56-dea754373219" containerName="registry-server" Feb 02 13:25:49 crc kubenswrapper[4721]: E0202 13:25:49.883283 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37372b76-ef54-4a44-9b56-dea754373219" containerName="extract-utilities" Feb 02 13:25:49 crc kubenswrapper[4721]: I0202 13:25:49.883292 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="37372b76-ef54-4a44-9b56-dea754373219" containerName="extract-utilities" Feb 02 13:25:49 crc kubenswrapper[4721]: E0202 13:25:49.883322 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce072a84-75da-4060-9c4a-d029b3a14947" containerName="heat-engine" Feb 02 13:25:49 crc kubenswrapper[4721]: I0202 13:25:49.883332 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce072a84-75da-4060-9c4a-d029b3a14947" containerName="heat-engine" Feb 02 13:25:49 crc kubenswrapper[4721]: E0202 13:25:49.883365 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37372b76-ef54-4a44-9b56-dea754373219" containerName="extract-content" Feb 02 13:25:49 crc kubenswrapper[4721]: I0202 13:25:49.883376 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="37372b76-ef54-4a44-9b56-dea754373219" containerName="extract-content" Feb 02 13:25:49 crc kubenswrapper[4721]: I0202 13:25:49.883639 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce072a84-75da-4060-9c4a-d029b3a14947" containerName="heat-engine" Feb 02 13:25:49 crc kubenswrapper[4721]: I0202 13:25:49.883671 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="37372b76-ef54-4a44-9b56-dea754373219" containerName="registry-server" Feb 02 13:25:49 crc kubenswrapper[4721]: I0202 13:25:49.885773 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6vl65" Feb 02 13:25:49 crc kubenswrapper[4721]: I0202 13:25:49.900733 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6vl65"] Feb 02 13:25:50 crc kubenswrapper[4721]: I0202 13:25:50.015536 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2db9546-309a-4e2a-8b57-e2986e6cb500-utilities\") pod \"certified-operators-6vl65\" (UID: \"c2db9546-309a-4e2a-8b57-e2986e6cb500\") " pod="openshift-marketplace/certified-operators-6vl65" Feb 02 13:25:50 crc kubenswrapper[4721]: I0202 13:25:50.015642 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2db9546-309a-4e2a-8b57-e2986e6cb500-catalog-content\") pod \"certified-operators-6vl65\" (UID: \"c2db9546-309a-4e2a-8b57-e2986e6cb500\") " pod="openshift-marketplace/certified-operators-6vl65" Feb 02 13:25:50 crc kubenswrapper[4721]: I0202 13:25:50.015781 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6ncw\" (UniqueName: \"kubernetes.io/projected/c2db9546-309a-4e2a-8b57-e2986e6cb500-kube-api-access-b6ncw\") pod \"certified-operators-6vl65\" (UID: \"c2db9546-309a-4e2a-8b57-e2986e6cb500\") " pod="openshift-marketplace/certified-operators-6vl65" Feb 02 13:25:50 crc kubenswrapper[4721]: I0202 13:25:50.117530 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6ncw\" (UniqueName: \"kubernetes.io/projected/c2db9546-309a-4e2a-8b57-e2986e6cb500-kube-api-access-b6ncw\") pod \"certified-operators-6vl65\" (UID: \"c2db9546-309a-4e2a-8b57-e2986e6cb500\") " pod="openshift-marketplace/certified-operators-6vl65" Feb 02 13:25:50 crc kubenswrapper[4721]: I0202 13:25:50.117638 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2db9546-309a-4e2a-8b57-e2986e6cb500-utilities\") pod \"certified-operators-6vl65\" (UID: \"c2db9546-309a-4e2a-8b57-e2986e6cb500\") " pod="openshift-marketplace/certified-operators-6vl65" Feb 02 13:25:50 crc kubenswrapper[4721]: I0202 13:25:50.117709 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2db9546-309a-4e2a-8b57-e2986e6cb500-catalog-content\") pod \"certified-operators-6vl65\" (UID: \"c2db9546-309a-4e2a-8b57-e2986e6cb500\") " pod="openshift-marketplace/certified-operators-6vl65" Feb 02 13:25:50 crc kubenswrapper[4721]: I0202 13:25:50.118431 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2db9546-309a-4e2a-8b57-e2986e6cb500-catalog-content\") pod \"certified-operators-6vl65\" (UID: \"c2db9546-309a-4e2a-8b57-e2986e6cb500\") " pod="openshift-marketplace/certified-operators-6vl65" Feb 02 13:25:50 crc kubenswrapper[4721]: I0202 13:25:50.118701 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2db9546-309a-4e2a-8b57-e2986e6cb500-utilities\") pod \"certified-operators-6vl65\" (UID: \"c2db9546-309a-4e2a-8b57-e2986e6cb500\") " pod="openshift-marketplace/certified-operators-6vl65" Feb 02 13:25:50 crc kubenswrapper[4721]: I0202 13:25:50.149326 4721 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-b6ncw\" (UniqueName: \"kubernetes.io/projected/c2db9546-309a-4e2a-8b57-e2986e6cb500-kube-api-access-b6ncw\") pod \"certified-operators-6vl65\" (UID: \"c2db9546-309a-4e2a-8b57-e2986e6cb500\") " pod="openshift-marketplace/certified-operators-6vl65" Feb 02 13:25:50 crc kubenswrapper[4721]: I0202 13:25:50.204805 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6vl65" Feb 02 13:25:50 crc kubenswrapper[4721]: W0202 13:25:50.767701 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2db9546_309a_4e2a_8b57_e2986e6cb500.slice/crio-ae96317dbbd9659e948a12834719912af10f2e2c9b3f171a99e46548715f116d WatchSource:0}: Error finding container ae96317dbbd9659e948a12834719912af10f2e2c9b3f171a99e46548715f116d: Status 404 returned error can't find the container with id ae96317dbbd9659e948a12834719912af10f2e2c9b3f171a99e46548715f116d Feb 02 13:25:50 crc kubenswrapper[4721]: I0202 13:25:50.779913 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6vl65"] Feb 02 13:25:51 crc kubenswrapper[4721]: I0202 13:25:51.086942 4721 generic.go:334] "Generic (PLEG): container finished" podID="c2db9546-309a-4e2a-8b57-e2986e6cb500" containerID="214137d2119b0f131d16421f0c5475602028f01ed014db4a693d69884c3bf169" exitCode=0 Feb 02 13:25:51 crc kubenswrapper[4721]: I0202 13:25:51.087041 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vl65" event={"ID":"c2db9546-309a-4e2a-8b57-e2986e6cb500","Type":"ContainerDied","Data":"214137d2119b0f131d16421f0c5475602028f01ed014db4a693d69884c3bf169"} Feb 02 13:25:51 crc kubenswrapper[4721]: I0202 13:25:51.087339 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vl65" event={"ID":"c2db9546-309a-4e2a-8b57-e2986e6cb500","Type":"ContainerStarted","Data":"ae96317dbbd9659e948a12834719912af10f2e2c9b3f171a99e46548715f116d"} Feb 02 13:25:51 crc kubenswrapper[4721]: I0202 13:25:51.090927 4721 generic.go:334] "Generic (PLEG): container finished" podID="d1170efb-7d4f-4c8b-84dd-6d86925bf5d3" containerID="250d61da9c9aec53fa8403200cdb7bf07e9fd015ad372194d53e43f3edd63adb" exitCode=0 Feb 02 13:25:51 crc kubenswrapper[4721]: I0202 13:25:51.090970 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3","Type":"ContainerDied","Data":"250d61da9c9aec53fa8403200cdb7bf07e9fd015ad372194d53e43f3edd63adb"} Feb 02 13:25:52 crc kubenswrapper[4721]: I0202 13:25:52.173911 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bjlf4" Feb 02 13:25:52 crc kubenswrapper[4721]: I0202 13:25:52.237102 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bjlf4" Feb 02 13:25:53 crc kubenswrapper[4721]: I0202 13:25:53.127237 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vl65" event={"ID":"c2db9546-309a-4e2a-8b57-e2986e6cb500","Type":"ContainerStarted","Data":"1d340a91edc1a2d06152b7bbe13ebe7e5ae745bad0fc295760f0755f827254e4"} Feb 02 13:25:53 crc kubenswrapper[4721]: I0202 13:25:53.849812 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjlf4"] Feb 02 13:25:54 
crc kubenswrapper[4721]: I0202 13:25:54.134461 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bjlf4" podUID="f1487506-5263-4ffe-b3e0-1a7a507590f9" containerName="registry-server" containerID="cri-o://8abc226211c9966989623516d814dc227ce6e9431d716262c64d61c953cd9de5" gracePeriod=2 Feb 02 13:25:54 crc kubenswrapper[4721]: I0202 13:25:54.700921 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bjlf4" Feb 02 13:25:54 crc kubenswrapper[4721]: I0202 13:25:54.739203 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1487506-5263-4ffe-b3e0-1a7a507590f9-catalog-content\") pod \"f1487506-5263-4ffe-b3e0-1a7a507590f9\" (UID: \"f1487506-5263-4ffe-b3e0-1a7a507590f9\") " Feb 02 13:25:54 crc kubenswrapper[4721]: I0202 13:25:54.739256 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1487506-5263-4ffe-b3e0-1a7a507590f9-utilities\") pod \"f1487506-5263-4ffe-b3e0-1a7a507590f9\" (UID: \"f1487506-5263-4ffe-b3e0-1a7a507590f9\") " Feb 02 13:25:54 crc kubenswrapper[4721]: I0202 13:25:54.739486 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvkl7\" (UniqueName: \"kubernetes.io/projected/f1487506-5263-4ffe-b3e0-1a7a507590f9-kube-api-access-rvkl7\") pod \"f1487506-5263-4ffe-b3e0-1a7a507590f9\" (UID: \"f1487506-5263-4ffe-b3e0-1a7a507590f9\") " Feb 02 13:25:54 crc kubenswrapper[4721]: I0202 13:25:54.740271 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1487506-5263-4ffe-b3e0-1a7a507590f9-utilities" (OuterVolumeSpecName: "utilities") pod "f1487506-5263-4ffe-b3e0-1a7a507590f9" (UID: "f1487506-5263-4ffe-b3e0-1a7a507590f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:25:54 crc kubenswrapper[4721]: I0202 13:25:54.764977 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1487506-5263-4ffe-b3e0-1a7a507590f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1487506-5263-4ffe-b3e0-1a7a507590f9" (UID: "f1487506-5263-4ffe-b3e0-1a7a507590f9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:25:54 crc kubenswrapper[4721]: I0202 13:25:54.765799 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1487506-5263-4ffe-b3e0-1a7a507590f9-kube-api-access-rvkl7" (OuterVolumeSpecName: "kube-api-access-rvkl7") pod "f1487506-5263-4ffe-b3e0-1a7a507590f9" (UID: "f1487506-5263-4ffe-b3e0-1a7a507590f9"). InnerVolumeSpecName "kube-api-access-rvkl7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:25:54 crc kubenswrapper[4721]: I0202 13:25:54.842052 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1487506-5263-4ffe-b3e0-1a7a507590f9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:54 crc kubenswrapper[4721]: I0202 13:25:54.842119 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1487506-5263-4ffe-b3e0-1a7a507590f9-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:54 crc kubenswrapper[4721]: I0202 13:25:54.842129 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvkl7\" (UniqueName: \"kubernetes.io/projected/f1487506-5263-4ffe-b3e0-1a7a507590f9-kube-api-access-rvkl7\") on node \"crc\" DevicePath \"\"" Feb 02 13:25:55 crc kubenswrapper[4721]: I0202 13:25:55.150175 4721 generic.go:334] "Generic (PLEG): container finished" podID="f1487506-5263-4ffe-b3e0-1a7a507590f9" containerID="8abc226211c9966989623516d814dc227ce6e9431d716262c64d61c953cd9de5" exitCode=0 Feb 02 13:25:55 crc kubenswrapper[4721]: I0202 13:25:55.150215 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjlf4" event={"ID":"f1487506-5263-4ffe-b3e0-1a7a507590f9","Type":"ContainerDied","Data":"8abc226211c9966989623516d814dc227ce6e9431d716262c64d61c953cd9de5"} Feb 02 13:25:55 crc kubenswrapper[4721]: I0202 13:25:55.150240 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjlf4" event={"ID":"f1487506-5263-4ffe-b3e0-1a7a507590f9","Type":"ContainerDied","Data":"baa4f2b8c7f38c357e7fd8029e914fb27779853104aeda245f29bdec34cc4815"} Feb 02 13:25:55 crc kubenswrapper[4721]: I0202 13:25:55.150257 4721 scope.go:117] "RemoveContainer" containerID="8abc226211c9966989623516d814dc227ce6e9431d716262c64d61c953cd9de5" Feb 02 13:25:55 crc kubenswrapper[4721]: I0202 13:25:55.150388 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bjlf4" Feb 02 13:25:55 crc kubenswrapper[4721]: I0202 13:25:55.174349 4721 scope.go:117] "RemoveContainer" containerID="286f545d594e5abd14663ecadd58ca5f7b2093d194dddbbe4cdbc1b3011f703d" Feb 02 13:25:55 crc kubenswrapper[4721]: I0202 13:25:55.194030 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjlf4"] Feb 02 13:25:55 crc kubenswrapper[4721]: I0202 13:25:55.207971 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjlf4"] Feb 02 13:25:55 crc kubenswrapper[4721]: I0202 13:25:55.214113 4721 scope.go:117] "RemoveContainer" containerID="4e9ec40449c9a13570d177a0fa260a694310bc1d1b70021291444805d29dc83d" Feb 02 13:25:55 crc kubenswrapper[4721]: E0202 13:25:55.329822 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1487506_5263_4ffe_b3e0_1a7a507590f9.slice\": RecentStats: unable to find data in memory cache]" Feb 02 13:25:55 crc kubenswrapper[4721]: I0202 13:25:55.344229 4721 scope.go:117] "RemoveContainer" containerID="8abc226211c9966989623516d814dc227ce6e9431d716262c64d61c953cd9de5" Feb 02 13:25:55 crc kubenswrapper[4721]: E0202 13:25:55.349781 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8abc226211c9966989623516d814dc227ce6e9431d716262c64d61c953cd9de5\": container with ID starting with 8abc226211c9966989623516d814dc227ce6e9431d716262c64d61c953cd9de5 not found: ID does not exist" containerID="8abc226211c9966989623516d814dc227ce6e9431d716262c64d61c953cd9de5" Feb 02 13:25:55 crc kubenswrapper[4721]: I0202 13:25:55.349832 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8abc226211c9966989623516d814dc227ce6e9431d716262c64d61c953cd9de5"} err="failed to get container status \"8abc226211c9966989623516d814dc227ce6e9431d716262c64d61c953cd9de5\": rpc error: code = NotFound desc = could not find container \"8abc226211c9966989623516d814dc227ce6e9431d716262c64d61c953cd9de5\": container with ID starting with 8abc226211c9966989623516d814dc227ce6e9431d716262c64d61c953cd9de5 not found: ID does not exist" Feb 02 13:25:55 crc kubenswrapper[4721]: I0202 13:25:55.349859 4721 scope.go:117] "RemoveContainer" containerID="286f545d594e5abd14663ecadd58ca5f7b2093d194dddbbe4cdbc1b3011f703d" Feb 02 13:25:55 crc kubenswrapper[4721]: E0202 13:25:55.350560 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"286f545d594e5abd14663ecadd58ca5f7b2093d194dddbbe4cdbc1b3011f703d\": container with ID starting with 286f545d594e5abd14663ecadd58ca5f7b2093d194dddbbe4cdbc1b3011f703d not found: ID does not exist" containerID="286f545d594e5abd14663ecadd58ca5f7b2093d194dddbbe4cdbc1b3011f703d" Feb 02 13:25:55 crc kubenswrapper[4721]: I0202 13:25:55.350598 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"286f545d594e5abd14663ecadd58ca5f7b2093d194dddbbe4cdbc1b3011f703d"} err="failed to get container status \"286f545d594e5abd14663ecadd58ca5f7b2093d194dddbbe4cdbc1b3011f703d\": rpc error: code = NotFound desc = could not find container \"286f545d594e5abd14663ecadd58ca5f7b2093d194dddbbe4cdbc1b3011f703d\": container with ID starting with 
286f545d594e5abd14663ecadd58ca5f7b2093d194dddbbe4cdbc1b3011f703d not found: ID does not exist" Feb 02 13:25:55 crc kubenswrapper[4721]: I0202 13:25:55.350624 4721 scope.go:117] "RemoveContainer" containerID="4e9ec40449c9a13570d177a0fa260a694310bc1d1b70021291444805d29dc83d" Feb 02 13:25:55 crc kubenswrapper[4721]: E0202 13:25:55.351048 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e9ec40449c9a13570d177a0fa260a694310bc1d1b70021291444805d29dc83d\": container with ID starting with 4e9ec40449c9a13570d177a0fa260a694310bc1d1b70021291444805d29dc83d not found: ID does not exist" containerID="4e9ec40449c9a13570d177a0fa260a694310bc1d1b70021291444805d29dc83d" Feb 02 13:25:55 crc kubenswrapper[4721]: I0202 13:25:55.351116 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e9ec40449c9a13570d177a0fa260a694310bc1d1b70021291444805d29dc83d"} err="failed to get container status \"4e9ec40449c9a13570d177a0fa260a694310bc1d1b70021291444805d29dc83d\": rpc error: code = NotFound desc = could not find container \"4e9ec40449c9a13570d177a0fa260a694310bc1d1b70021291444805d29dc83d\": container with ID starting with 4e9ec40449c9a13570d177a0fa260a694310bc1d1b70021291444805d29dc83d not found: ID does not exist" Feb 02 13:25:56 crc kubenswrapper[4721]: I0202 13:25:56.174988 4721 generic.go:334] "Generic (PLEG): container finished" podID="c2db9546-309a-4e2a-8b57-e2986e6cb500" containerID="1d340a91edc1a2d06152b7bbe13ebe7e5ae745bad0fc295760f0755f827254e4" exitCode=0 Feb 02 13:25:56 crc kubenswrapper[4721]: I0202 13:25:56.175106 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vl65" event={"ID":"c2db9546-309a-4e2a-8b57-e2986e6cb500","Type":"ContainerDied","Data":"1d340a91edc1a2d06152b7bbe13ebe7e5ae745bad0fc295760f0755f827254e4"} Feb 02 13:25:56 crc kubenswrapper[4721]: I0202 13:25:56.423729 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1487506-5263-4ffe-b3e0-1a7a507590f9" path="/var/lib/kubelet/pods/f1487506-5263-4ffe-b3e0-1a7a507590f9/volumes" Feb 02 13:25:57 crc kubenswrapper[4721]: I0202 13:25:57.193135 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vl65" event={"ID":"c2db9546-309a-4e2a-8b57-e2986e6cb500","Type":"ContainerStarted","Data":"34f03665f21982d0db05900d73ae2a2969e1e103422293a6d050c2f0d4308917"} Feb 02 13:25:57 crc kubenswrapper[4721]: I0202 13:25:57.210688 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6vl65" podStartSLOduration=2.585467734 podStartE2EDuration="8.210671665s" podCreationTimestamp="2026-02-02 13:25:49 +0000 UTC" firstStartedPulling="2026-02-02 13:25:51.088983209 +0000 UTC m=+1491.391497598" lastFinishedPulling="2026-02-02 13:25:56.71418714 +0000 UTC m=+1497.016701529" observedRunningTime="2026-02-02 13:25:57.209630087 +0000 UTC m=+1497.512144486" watchObservedRunningTime="2026-02-02 13:25:57.210671665 +0000 UTC m=+1497.513186054" Feb 02 13:25:59 crc kubenswrapper[4721]: I0202 13:25:59.097680 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d1170efb-7d4f-4c8b-84dd-6d86925bf5d3" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 02 13:26:00 crc kubenswrapper[4721]: I0202 13:26:00.206104 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-6vl65" Feb 02 13:26:00 crc kubenswrapper[4721]: I0202 13:26:00.206390 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6vl65" Feb 02 13:26:00 crc kubenswrapper[4721]: I0202 13:26:00.265950 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6vl65" Feb 02 13:26:04 crc kubenswrapper[4721]: I0202 13:26:04.266042 4721 generic.go:334] "Generic (PLEG): container finished" podID="f01b253a-c7c6-4c9e-a800-a1732ba06f37" containerID="8b1af4c2f51b92e7d8e686eda8c46c518ebd4e4b694ff77617f39b7b376a484a" exitCode=0 Feb 02 13:26:04 crc kubenswrapper[4721]: I0202 13:26:04.266129 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9h27j" event={"ID":"f01b253a-c7c6-4c9e-a800-a1732ba06f37","Type":"ContainerDied","Data":"8b1af4c2f51b92e7d8e686eda8c46c518ebd4e4b694ff77617f39b7b376a484a"} Feb 02 13:26:05 crc kubenswrapper[4721]: I0202 13:26:05.711104 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9h27j" Feb 02 13:26:05 crc kubenswrapper[4721]: I0202 13:26:05.808824 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f01b253a-c7c6-4c9e-a800-a1732ba06f37-config-data\") pod \"f01b253a-c7c6-4c9e-a800-a1732ba06f37\" (UID: \"f01b253a-c7c6-4c9e-a800-a1732ba06f37\") " Feb 02 13:26:05 crc kubenswrapper[4721]: I0202 13:26:05.809095 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f01b253a-c7c6-4c9e-a800-a1732ba06f37-combined-ca-bundle\") pod \"f01b253a-c7c6-4c9e-a800-a1732ba06f37\" (UID: \"f01b253a-c7c6-4c9e-a800-a1732ba06f37\") " Feb 02 13:26:05 crc kubenswrapper[4721]: I0202 13:26:05.809137 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc8q5\" (UniqueName: \"kubernetes.io/projected/f01b253a-c7c6-4c9e-a800-a1732ba06f37-kube-api-access-zc8q5\") pod \"f01b253a-c7c6-4c9e-a800-a1732ba06f37\" (UID: \"f01b253a-c7c6-4c9e-a800-a1732ba06f37\") " Feb 02 13:26:05 crc kubenswrapper[4721]: I0202 13:26:05.809158 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f01b253a-c7c6-4c9e-a800-a1732ba06f37-scripts\") pod \"f01b253a-c7c6-4c9e-a800-a1732ba06f37\" (UID: \"f01b253a-c7c6-4c9e-a800-a1732ba06f37\") " Feb 02 13:26:05 crc kubenswrapper[4721]: I0202 13:26:05.815022 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f01b253a-c7c6-4c9e-a800-a1732ba06f37-kube-api-access-zc8q5" (OuterVolumeSpecName: "kube-api-access-zc8q5") pod "f01b253a-c7c6-4c9e-a800-a1732ba06f37" (UID: "f01b253a-c7c6-4c9e-a800-a1732ba06f37"). InnerVolumeSpecName "kube-api-access-zc8q5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:26:05 crc kubenswrapper[4721]: I0202 13:26:05.815378 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f01b253a-c7c6-4c9e-a800-a1732ba06f37-scripts" (OuterVolumeSpecName: "scripts") pod "f01b253a-c7c6-4c9e-a800-a1732ba06f37" (UID: "f01b253a-c7c6-4c9e-a800-a1732ba06f37"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:26:05 crc kubenswrapper[4721]: I0202 13:26:05.847916 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f01b253a-c7c6-4c9e-a800-a1732ba06f37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f01b253a-c7c6-4c9e-a800-a1732ba06f37" (UID: "f01b253a-c7c6-4c9e-a800-a1732ba06f37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:26:05 crc kubenswrapper[4721]: I0202 13:26:05.848948 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f01b253a-c7c6-4c9e-a800-a1732ba06f37-config-data" (OuterVolumeSpecName: "config-data") pod "f01b253a-c7c6-4c9e-a800-a1732ba06f37" (UID: "f01b253a-c7c6-4c9e-a800-a1732ba06f37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:26:05 crc kubenswrapper[4721]: I0202 13:26:05.911722 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f01b253a-c7c6-4c9e-a800-a1732ba06f37-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:05 crc kubenswrapper[4721]: I0202 13:26:05.911766 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f01b253a-c7c6-4c9e-a800-a1732ba06f37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:05 crc kubenswrapper[4721]: I0202 13:26:05.911781 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc8q5\" (UniqueName: \"kubernetes.io/projected/f01b253a-c7c6-4c9e-a800-a1732ba06f37-kube-api-access-zc8q5\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:05 crc kubenswrapper[4721]: I0202 13:26:05.911793 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f01b253a-c7c6-4c9e-a800-a1732ba06f37-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:06 crc kubenswrapper[4721]: I0202 13:26:06.287804 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9h27j" event={"ID":"f01b253a-c7c6-4c9e-a800-a1732ba06f37","Type":"ContainerDied","Data":"5b3069f620076da85927b973fa828028181bff883e33dc0cd16ab04434fa94e6"} Feb 02 13:26:06 crc kubenswrapper[4721]: I0202 13:26:06.287834 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9h27j" Feb 02 13:26:06 crc kubenswrapper[4721]: I0202 13:26:06.287851 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b3069f620076da85927b973fa828028181bff883e33dc0cd16ab04434fa94e6" Feb 02 13:26:06 crc kubenswrapper[4721]: I0202 13:26:06.457490 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 13:26:06 crc kubenswrapper[4721]: E0202 13:26:06.458016 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1487506-5263-4ffe-b3e0-1a7a507590f9" containerName="extract-content" Feb 02 13:26:06 crc kubenswrapper[4721]: I0202 13:26:06.458032 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1487506-5263-4ffe-b3e0-1a7a507590f9" containerName="extract-content" Feb 02 13:26:06 crc kubenswrapper[4721]: E0202 13:26:06.458052 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1487506-5263-4ffe-b3e0-1a7a507590f9" containerName="extract-utilities" Feb 02 13:26:06 crc kubenswrapper[4721]: I0202 13:26:06.458058 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1487506-5263-4ffe-b3e0-1a7a507590f9" containerName="extract-utilities" Feb 02 13:26:06 crc kubenswrapper[4721]: E0202 13:26:06.458109 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1487506-5263-4ffe-b3e0-1a7a507590f9" containerName="registry-server" Feb 02 13:26:06 crc kubenswrapper[4721]: I0202 13:26:06.458116 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1487506-5263-4ffe-b3e0-1a7a507590f9" containerName="registry-server" Feb 02 13:26:06 crc kubenswrapper[4721]: E0202 13:26:06.458131 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f01b253a-c7c6-4c9e-a800-a1732ba06f37" containerName="nova-cell0-conductor-db-sync" Feb 02 13:26:06 crc kubenswrapper[4721]: I0202 13:26:06.458137 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f01b253a-c7c6-4c9e-a800-a1732ba06f37" containerName="nova-cell0-conductor-db-sync" Feb 02 13:26:06 crc kubenswrapper[4721]: I0202 13:26:06.458341 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1487506-5263-4ffe-b3e0-1a7a507590f9" containerName="registry-server" Feb 02 13:26:06 crc kubenswrapper[4721]: I0202 13:26:06.458362 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f01b253a-c7c6-4c9e-a800-a1732ba06f37" containerName="nova-cell0-conductor-db-sync" Feb 02 13:26:06 crc kubenswrapper[4721]: I0202 13:26:06.459171 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 13:26:06 crc kubenswrapper[4721]: I0202 13:26:06.461300 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mq69d" Feb 02 13:26:06 crc kubenswrapper[4721]: I0202 13:26:06.461498 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 02 13:26:06 crc kubenswrapper[4721]: I0202 13:26:06.468426 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 13:26:06 crc kubenswrapper[4721]: I0202 13:26:06.626001 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrb8t\" (UniqueName: \"kubernetes.io/projected/562aee22-e2a0-4706-b65a-7e9398823dec-kube-api-access-nrb8t\") pod \"nova-cell0-conductor-0\" (UID: \"562aee22-e2a0-4706-b65a-7e9398823dec\") " pod="openstack/nova-cell0-conductor-0" Feb 02 13:26:06 crc kubenswrapper[4721]: I0202 13:26:06.626475 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/562aee22-e2a0-4706-b65a-7e9398823dec-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"562aee22-e2a0-4706-b65a-7e9398823dec\") " pod="openstack/nova-cell0-conductor-0" Feb 02 13:26:06 crc kubenswrapper[4721]: I0202 13:26:06.626655 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/562aee22-e2a0-4706-b65a-7e9398823dec-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"562aee22-e2a0-4706-b65a-7e9398823dec\") " pod="openstack/nova-cell0-conductor-0" Feb 02 13:26:06 crc kubenswrapper[4721]: I0202 13:26:06.729571 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/562aee22-e2a0-4706-b65a-7e9398823dec-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"562aee22-e2a0-4706-b65a-7e9398823dec\") " pod="openstack/nova-cell0-conductor-0" Feb 02 13:26:06 crc kubenswrapper[4721]: I0202 13:26:06.729701 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/562aee22-e2a0-4706-b65a-7e9398823dec-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"562aee22-e2a0-4706-b65a-7e9398823dec\") " pod="openstack/nova-cell0-conductor-0" Feb 02 13:26:06 crc kubenswrapper[4721]: I0202 13:26:06.729902 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrb8t\" (UniqueName: \"kubernetes.io/projected/562aee22-e2a0-4706-b65a-7e9398823dec-kube-api-access-nrb8t\") pod \"nova-cell0-conductor-0\" (UID: \"562aee22-e2a0-4706-b65a-7e9398823dec\") " pod="openstack/nova-cell0-conductor-0" Feb 02 13:26:06 crc kubenswrapper[4721]: I0202 13:26:06.734789 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/562aee22-e2a0-4706-b65a-7e9398823dec-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"562aee22-e2a0-4706-b65a-7e9398823dec\") " pod="openstack/nova-cell0-conductor-0" Feb 02 13:26:06 crc kubenswrapper[4721]: I0202 13:26:06.736223 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/562aee22-e2a0-4706-b65a-7e9398823dec-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"562aee22-e2a0-4706-b65a-7e9398823dec\") " pod="openstack/nova-cell0-conductor-0" Feb 02 13:26:06 crc kubenswrapper[4721]: I0202 13:26:06.758600 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrb8t\" (UniqueName: \"kubernetes.io/projected/562aee22-e2a0-4706-b65a-7e9398823dec-kube-api-access-nrb8t\") pod \"nova-cell0-conductor-0\" (UID: \"562aee22-e2a0-4706-b65a-7e9398823dec\") " pod="openstack/nova-cell0-conductor-0" Feb 02 13:26:06 crc kubenswrapper[4721]: I0202 13:26:06.798748 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 13:26:07 crc kubenswrapper[4721]: I0202 13:26:07.272492 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 13:26:07 crc kubenswrapper[4721]: I0202 13:26:07.298925 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"562aee22-e2a0-4706-b65a-7e9398823dec","Type":"ContainerStarted","Data":"ffecfe3f218a2f41dffa98466ead7a510067b259e94cd90543b21b12af58ebc1"} Feb 02 13:26:08 crc kubenswrapper[4721]: I0202 13:26:08.309917 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"562aee22-e2a0-4706-b65a-7e9398823dec","Type":"ContainerStarted","Data":"fbd59eeadba9cd8dbe6ae843c75399a80ca2280eb621f91ea7107f191702cb00"} Feb 02 13:26:08 crc kubenswrapper[4721]: I0202 13:26:08.310290 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 02 13:26:08 crc kubenswrapper[4721]: I0202 13:26:08.338271 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.338253405 podStartE2EDuration="2.338253405s" podCreationTimestamp="2026-02-02 13:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:26:08.328982485 +0000 UTC m=+1508.631496894" watchObservedRunningTime="2026-02-02 13:26:08.338253405 +0000 UTC m=+1508.640767794" Feb 02 13:26:09 crc kubenswrapper[4721]: E0202 13:26:09.997135 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf01b253a_c7c6_4c9e_a800_a1732ba06f37.slice/crio-5b3069f620076da85927b973fa828028181bff883e33dc0cd16ab04434fa94e6\": RecentStats: unable to find data in memory cache]" Feb 02 13:26:10 crc kubenswrapper[4721]: E0202 13:26:10.053103 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf01b253a_c7c6_4c9e_a800_a1732ba06f37.slice/crio-5b3069f620076da85927b973fa828028181bff883e33dc0cd16ab04434fa94e6\": RecentStats: unable to find data in memory cache]" Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.204296 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-59bff7fb85-wq6q5" Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.307386 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6vl65" Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.330973 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bddd12fa-0653-4199-867f-bfdf51350b39-config-data-custom\") pod \"bddd12fa-0653-4199-867f-bfdf51350b39\" (UID: \"bddd12fa-0653-4199-867f-bfdf51350b39\") " Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.331032 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bddd12fa-0653-4199-867f-bfdf51350b39-combined-ca-bundle\") pod \"bddd12fa-0653-4199-867f-bfdf51350b39\" (UID: \"bddd12fa-0653-4199-867f-bfdf51350b39\") " Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.331108 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jwmm\" (UniqueName: \"kubernetes.io/projected/bddd12fa-0653-4199-867f-bfdf51350b39-kube-api-access-6jwmm\") pod \"bddd12fa-0653-4199-867f-bfdf51350b39\" (UID: \"bddd12fa-0653-4199-867f-bfdf51350b39\") " Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.331279 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bddd12fa-0653-4199-867f-bfdf51350b39-config-data\") pod \"bddd12fa-0653-4199-867f-bfdf51350b39\" (UID: \"bddd12fa-0653-4199-867f-bfdf51350b39\") " Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.340315 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bddd12fa-0653-4199-867f-bfdf51350b39-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bddd12fa-0653-4199-867f-bfdf51350b39" (UID: "bddd12fa-0653-4199-867f-bfdf51350b39"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.351576 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bddd12fa-0653-4199-867f-bfdf51350b39-kube-api-access-6jwmm" (OuterVolumeSpecName: "kube-api-access-6jwmm") pod "bddd12fa-0653-4199-867f-bfdf51350b39" (UID: "bddd12fa-0653-4199-867f-bfdf51350b39"). InnerVolumeSpecName "kube-api-access-6jwmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.398323 4721 generic.go:334] "Generic (PLEG): container finished" podID="d1170efb-7d4f-4c8b-84dd-6d86925bf5d3" containerID="b559fdc4b6fca20276abf4dceba9979868b3f8f7cc76c9ba5d01cbaddb599222" exitCode=137 Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.398441 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3","Type":"ContainerDied","Data":"b559fdc4b6fca20276abf4dceba9979868b3f8f7cc76c9ba5d01cbaddb599222"} Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.426603 4721 generic.go:334] "Generic (PLEG): container finished" podID="bddd12fa-0653-4199-867f-bfdf51350b39" containerID="c89c6bf878c74f41052b7958471a080d9fee687b5e9464975790c1897da44a75" exitCode=137 Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.438399 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-59bff7fb85-wq6q5" Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.438956 4721 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bddd12fa-0653-4199-867f-bfdf51350b39-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.441498 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jwmm\" (UniqueName: \"kubernetes.io/projected/bddd12fa-0653-4199-867f-bfdf51350b39-kube-api-access-6jwmm\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.486035 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bddd12fa-0653-4199-867f-bfdf51350b39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bddd12fa-0653-4199-867f-bfdf51350b39" (UID: "bddd12fa-0653-4199-867f-bfdf51350b39"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.495927 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-59bff7fb85-wq6q5" event={"ID":"bddd12fa-0653-4199-867f-bfdf51350b39","Type":"ContainerDied","Data":"c89c6bf878c74f41052b7958471a080d9fee687b5e9464975790c1897da44a75"} Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.495988 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-59bff7fb85-wq6q5" event={"ID":"bddd12fa-0653-4199-867f-bfdf51350b39","Type":"ContainerDied","Data":"0f9d2ecaa1c8c841d9801b80ea33e35c8c2a2c815bca71770000a6385ed7be15"} Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.496008 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6vl65"] Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.496290 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6vl65" podUID="c2db9546-309a-4e2a-8b57-e2986e6cb500" containerName="registry-server" containerID="cri-o://34f03665f21982d0db05900d73ae2a2969e1e103422293a6d050c2f0d4308917" gracePeriod=2 Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.496413 4721 scope.go:117] "RemoveContainer" containerID="c89c6bf878c74f41052b7958471a080d9fee687b5e9464975790c1897da44a75" Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.541079 4721 scope.go:117] "RemoveContainer" containerID="c89c6bf878c74f41052b7958471a080d9fee687b5e9464975790c1897da44a75" Feb 02 13:26:10 crc kubenswrapper[4721]: E0202 13:26:10.544756 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c89c6bf878c74f41052b7958471a080d9fee687b5e9464975790c1897da44a75\": container with ID starting with c89c6bf878c74f41052b7958471a080d9fee687b5e9464975790c1897da44a75 not found: ID does not exist" containerID="c89c6bf878c74f41052b7958471a080d9fee687b5e9464975790c1897da44a75" Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.544805 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c89c6bf878c74f41052b7958471a080d9fee687b5e9464975790c1897da44a75"} err="failed to get container status \"c89c6bf878c74f41052b7958471a080d9fee687b5e9464975790c1897da44a75\": rpc error: code = NotFound desc = could not find container \"c89c6bf878c74f41052b7958471a080d9fee687b5e9464975790c1897da44a75\": container with ID 
starting with c89c6bf878c74f41052b7958471a080d9fee687b5e9464975790c1897da44a75 not found: ID does not exist" Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.546842 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bddd12fa-0653-4199-867f-bfdf51350b39-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.548953 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bddd12fa-0653-4199-867f-bfdf51350b39-config-data" (OuterVolumeSpecName: "config-data") pod "bddd12fa-0653-4199-867f-bfdf51350b39" (UID: "bddd12fa-0653-4199-867f-bfdf51350b39"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.644246 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.648921 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bddd12fa-0653-4199-867f-bfdf51350b39-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.750181 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-scripts\") pod \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\" (UID: \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\") " Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.750282 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85frw\" (UniqueName: \"kubernetes.io/projected/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-kube-api-access-85frw\") pod \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\" (UID: \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\") " Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.750437 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-run-httpd\") pod \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\" (UID: \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\") " Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.750465 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-config-data\") pod \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\" (UID: \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\") " Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.750534 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-combined-ca-bundle\") pod \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\" (UID: \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\") " Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.750565 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-log-httpd\") pod \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\" (UID: \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\") " Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.750669 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-sg-core-conf-yaml\") pod \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\" (UID: \"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3\") " Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.752822 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d1170efb-7d4f-4c8b-84dd-6d86925bf5d3" (UID: "d1170efb-7d4f-4c8b-84dd-6d86925bf5d3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.756321 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-kube-api-access-85frw" (OuterVolumeSpecName: "kube-api-access-85frw") pod "d1170efb-7d4f-4c8b-84dd-6d86925bf5d3" (UID: "d1170efb-7d4f-4c8b-84dd-6d86925bf5d3"). InnerVolumeSpecName "kube-api-access-85frw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.756618 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-scripts" (OuterVolumeSpecName: "scripts") pod "d1170efb-7d4f-4c8b-84dd-6d86925bf5d3" (UID: "d1170efb-7d4f-4c8b-84dd-6d86925bf5d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.757346 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d1170efb-7d4f-4c8b-84dd-6d86925bf5d3" (UID: "d1170efb-7d4f-4c8b-84dd-6d86925bf5d3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.853851 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-59bff7fb85-wq6q5"] Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.859396 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d1170efb-7d4f-4c8b-84dd-6d86925bf5d3" (UID: "d1170efb-7d4f-4c8b-84dd-6d86925bf5d3"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.863234 4721 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.863279 4721 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.863299 4721 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.863312 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.863323 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85frw\" (UniqueName: \"kubernetes.io/projected/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-kube-api-access-85frw\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.875188 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-59bff7fb85-wq6q5"] Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.967269 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1170efb-7d4f-4c8b-84dd-6d86925bf5d3" (UID: "d1170efb-7d4f-4c8b-84dd-6d86925bf5d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:26:10 crc kubenswrapper[4721]: I0202 13:26:10.973387 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.073057 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-config-data" (OuterVolumeSpecName: "config-data") pod "d1170efb-7d4f-4c8b-84dd-6d86925bf5d3" (UID: "d1170efb-7d4f-4c8b-84dd-6d86925bf5d3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.075133 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.201681 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-gq5tv"] Feb 02 13:26:11 crc kubenswrapper[4721]: E0202 13:26:11.202691 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1170efb-7d4f-4c8b-84dd-6d86925bf5d3" containerName="sg-core" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.202843 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1170efb-7d4f-4c8b-84dd-6d86925bf5d3" containerName="sg-core" Feb 02 13:26:11 crc kubenswrapper[4721]: E0202 13:26:11.202934 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1170efb-7d4f-4c8b-84dd-6d86925bf5d3" containerName="proxy-httpd" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.203004 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1170efb-7d4f-4c8b-84dd-6d86925bf5d3" containerName="proxy-httpd" Feb 02 13:26:11 crc kubenswrapper[4721]: E0202 13:26:11.203089 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bddd12fa-0653-4199-867f-bfdf51350b39" containerName="heat-cfnapi" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.203206 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="bddd12fa-0653-4199-867f-bfdf51350b39" containerName="heat-cfnapi" Feb 02 13:26:11 crc kubenswrapper[4721]: E0202 13:26:11.203305 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1170efb-7d4f-4c8b-84dd-6d86925bf5d3" containerName="ceilometer-central-agent" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.203379 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1170efb-7d4f-4c8b-84dd-6d86925bf5d3" containerName="ceilometer-central-agent" Feb 02 13:26:11 crc kubenswrapper[4721]: E0202 13:26:11.203491 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1170efb-7d4f-4c8b-84dd-6d86925bf5d3" containerName="ceilometer-notification-agent" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.203566 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1170efb-7d4f-4c8b-84dd-6d86925bf5d3" containerName="ceilometer-notification-agent" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.203910 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1170efb-7d4f-4c8b-84dd-6d86925bf5d3" containerName="ceilometer-central-agent" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.204010 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="bddd12fa-0653-4199-867f-bfdf51350b39" containerName="heat-cfnapi" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.204120 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1170efb-7d4f-4c8b-84dd-6d86925bf5d3" containerName="sg-core" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.204233 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1170efb-7d4f-4c8b-84dd-6d86925bf5d3" containerName="ceilometer-notification-agent" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.204335 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1170efb-7d4f-4c8b-84dd-6d86925bf5d3" containerName="proxy-httpd" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.205503 4721 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-gq5tv" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.245646 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-gq5tv"] Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.254974 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6vl65" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.282378 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-2c26-account-create-update-2r4tb"] Feb 02 13:26:11 crc kubenswrapper[4721]: E0202 13:26:11.283615 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2db9546-309a-4e2a-8b57-e2986e6cb500" containerName="extract-utilities" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.283729 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2db9546-309a-4e2a-8b57-e2986e6cb500" containerName="extract-utilities" Feb 02 13:26:11 crc kubenswrapper[4721]: E0202 13:26:11.283838 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2db9546-309a-4e2a-8b57-e2986e6cb500" containerName="registry-server" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.283932 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2db9546-309a-4e2a-8b57-e2986e6cb500" containerName="registry-server" Feb 02 13:26:11 crc kubenswrapper[4721]: E0202 13:26:11.284040 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2db9546-309a-4e2a-8b57-e2986e6cb500" containerName="extract-content" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.284186 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2db9546-309a-4e2a-8b57-e2986e6cb500" containerName="extract-content" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.284576 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2db9546-309a-4e2a-8b57-e2986e6cb500" containerName="registry-server" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.285540 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-2c26-account-create-update-2r4tb" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.291152 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.304762 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-2c26-account-create-update-2r4tb"] Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.390520 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2db9546-309a-4e2a-8b57-e2986e6cb500-catalog-content\") pod \"c2db9546-309a-4e2a-8b57-e2986e6cb500\" (UID: \"c2db9546-309a-4e2a-8b57-e2986e6cb500\") " Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.390995 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2db9546-309a-4e2a-8b57-e2986e6cb500-utilities\") pod \"c2db9546-309a-4e2a-8b57-e2986e6cb500\" (UID: \"c2db9546-309a-4e2a-8b57-e2986e6cb500\") " Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.391203 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6ncw\" (UniqueName: \"kubernetes.io/projected/c2db9546-309a-4e2a-8b57-e2986e6cb500-kube-api-access-b6ncw\") pod \"c2db9546-309a-4e2a-8b57-e2986e6cb500\" (UID: \"c2db9546-309a-4e2a-8b57-e2986e6cb500\") " Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.391721 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/997707ef-4296-4151-9385-0fbb48b5e317-operator-scripts\") pod \"aodh-2c26-account-create-update-2r4tb\" (UID: \"997707ef-4296-4151-9385-0fbb48b5e317\") " pod="openstack/aodh-2c26-account-create-update-2r4tb" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.391916 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njwrd\" (UniqueName: \"kubernetes.io/projected/997707ef-4296-4151-9385-0fbb48b5e317-kube-api-access-njwrd\") pod \"aodh-2c26-account-create-update-2r4tb\" (UID: \"997707ef-4296-4151-9385-0fbb48b5e317\") " pod="openstack/aodh-2c26-account-create-update-2r4tb" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.392000 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcbwm\" (UniqueName: \"kubernetes.io/projected/ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9-kube-api-access-jcbwm\") pod \"aodh-db-create-gq5tv\" (UID: \"ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9\") " pod="openstack/aodh-db-create-gq5tv" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.392274 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9-operator-scripts\") pod \"aodh-db-create-gq5tv\" (UID: \"ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9\") " pod="openstack/aodh-db-create-gq5tv" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.393464 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2db9546-309a-4e2a-8b57-e2986e6cb500-utilities" (OuterVolumeSpecName: "utilities") pod "c2db9546-309a-4e2a-8b57-e2986e6cb500" (UID: "c2db9546-309a-4e2a-8b57-e2986e6cb500"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.399354 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2db9546-309a-4e2a-8b57-e2986e6cb500-kube-api-access-b6ncw" (OuterVolumeSpecName: "kube-api-access-b6ncw") pod "c2db9546-309a-4e2a-8b57-e2986e6cb500" (UID: "c2db9546-309a-4e2a-8b57-e2986e6cb500"). InnerVolumeSpecName "kube-api-access-b6ncw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.455207 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2db9546-309a-4e2a-8b57-e2986e6cb500-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2db9546-309a-4e2a-8b57-e2986e6cb500" (UID: "c2db9546-309a-4e2a-8b57-e2986e6cb500"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.459471 4721 generic.go:334] "Generic (PLEG): container finished" podID="c2db9546-309a-4e2a-8b57-e2986e6cb500" containerID="34f03665f21982d0db05900d73ae2a2969e1e103422293a6d050c2f0d4308917" exitCode=0 Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.459554 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vl65" event={"ID":"c2db9546-309a-4e2a-8b57-e2986e6cb500","Type":"ContainerDied","Data":"34f03665f21982d0db05900d73ae2a2969e1e103422293a6d050c2f0d4308917"} Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.459587 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vl65" event={"ID":"c2db9546-309a-4e2a-8b57-e2986e6cb500","Type":"ContainerDied","Data":"ae96317dbbd9659e948a12834719912af10f2e2c9b3f171a99e46548715f116d"} Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.459607 4721 scope.go:117] "RemoveContainer" containerID="34f03665f21982d0db05900d73ae2a2969e1e103422293a6d050c2f0d4308917" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.459772 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6vl65" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.464465 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1170efb-7d4f-4c8b-84dd-6d86925bf5d3","Type":"ContainerDied","Data":"17422626d1aef368e4a89ff57fcd151611e206eb400b70c99ece56e2a2b12dbd"} Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.464658 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.483722 4721 scope.go:117] "RemoveContainer" containerID="1d340a91edc1a2d06152b7bbe13ebe7e5ae745bad0fc295760f0755f827254e4" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.494252 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/997707ef-4296-4151-9385-0fbb48b5e317-operator-scripts\") pod \"aodh-2c26-account-create-update-2r4tb\" (UID: \"997707ef-4296-4151-9385-0fbb48b5e317\") " pod="openstack/aodh-2c26-account-create-update-2r4tb" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.494303 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njwrd\" (UniqueName: \"kubernetes.io/projected/997707ef-4296-4151-9385-0fbb48b5e317-kube-api-access-njwrd\") pod \"aodh-2c26-account-create-update-2r4tb\" (UID: \"997707ef-4296-4151-9385-0fbb48b5e317\") " pod="openstack/aodh-2c26-account-create-update-2r4tb" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.494339 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcbwm\" (UniqueName: \"kubernetes.io/projected/ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9-kube-api-access-jcbwm\") pod \"aodh-db-create-gq5tv\" (UID: \"ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9\") " pod="openstack/aodh-db-create-gq5tv" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.494536 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9-operator-scripts\") pod \"aodh-db-create-gq5tv\" (UID: \"ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9\") " pod="openstack/aodh-db-create-gq5tv" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.494632 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2db9546-309a-4e2a-8b57-e2986e6cb500-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.494645 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2db9546-309a-4e2a-8b57-e2986e6cb500-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.494669 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6ncw\" (UniqueName: \"kubernetes.io/projected/c2db9546-309a-4e2a-8b57-e2986e6cb500-kube-api-access-b6ncw\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.496755 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/997707ef-4296-4151-9385-0fbb48b5e317-operator-scripts\") pod \"aodh-2c26-account-create-update-2r4tb\" (UID: \"997707ef-4296-4151-9385-0fbb48b5e317\") " pod="openstack/aodh-2c26-account-create-update-2r4tb" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.497182 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9-operator-scripts\") pod \"aodh-db-create-gq5tv\" (UID: \"ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9\") " pod="openstack/aodh-db-create-gq5tv" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.523891 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jcbwm\" (UniqueName: \"kubernetes.io/projected/ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9-kube-api-access-jcbwm\") pod \"aodh-db-create-gq5tv\" (UID: \"ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9\") " pod="openstack/aodh-db-create-gq5tv" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.529773 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njwrd\" (UniqueName: \"kubernetes.io/projected/997707ef-4296-4151-9385-0fbb48b5e317-kube-api-access-njwrd\") pod \"aodh-2c26-account-create-update-2r4tb\" (UID: \"997707ef-4296-4151-9385-0fbb48b5e317\") " pod="openstack/aodh-2c26-account-create-update-2r4tb" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.546584 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.566324 4721 scope.go:117] "RemoveContainer" containerID="214137d2119b0f131d16421f0c5475602028f01ed014db4a693d69884c3bf169" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.567638 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.577825 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-gq5tv" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.604699 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.610302 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.611657 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-2c26-account-create-update-2r4tb" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.617825 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.618308 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.627552 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6vl65"] Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.631767 4721 scope.go:117] "RemoveContainer" containerID="34f03665f21982d0db05900d73ae2a2969e1e103422293a6d050c2f0d4308917" Feb 02 13:26:11 crc kubenswrapper[4721]: E0202 13:26:11.632274 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34f03665f21982d0db05900d73ae2a2969e1e103422293a6d050c2f0d4308917\": container with ID starting with 34f03665f21982d0db05900d73ae2a2969e1e103422293a6d050c2f0d4308917 not found: ID does not exist" containerID="34f03665f21982d0db05900d73ae2a2969e1e103422293a6d050c2f0d4308917" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.632324 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34f03665f21982d0db05900d73ae2a2969e1e103422293a6d050c2f0d4308917"} err="failed to get container status \"34f03665f21982d0db05900d73ae2a2969e1e103422293a6d050c2f0d4308917\": rpc error: code = NotFound desc = could not find container \"34f03665f21982d0db05900d73ae2a2969e1e103422293a6d050c2f0d4308917\": container with ID starting with 34f03665f21982d0db05900d73ae2a2969e1e103422293a6d050c2f0d4308917 not found: ID does not exist" 
Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.632380 4721 scope.go:117] "RemoveContainer" containerID="1d340a91edc1a2d06152b7bbe13ebe7e5ae745bad0fc295760f0755f827254e4" Feb 02 13:26:11 crc kubenswrapper[4721]: E0202 13:26:11.632789 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d340a91edc1a2d06152b7bbe13ebe7e5ae745bad0fc295760f0755f827254e4\": container with ID starting with 1d340a91edc1a2d06152b7bbe13ebe7e5ae745bad0fc295760f0755f827254e4 not found: ID does not exist" containerID="1d340a91edc1a2d06152b7bbe13ebe7e5ae745bad0fc295760f0755f827254e4" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.632831 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d340a91edc1a2d06152b7bbe13ebe7e5ae745bad0fc295760f0755f827254e4"} err="failed to get container status \"1d340a91edc1a2d06152b7bbe13ebe7e5ae745bad0fc295760f0755f827254e4\": rpc error: code = NotFound desc = could not find container \"1d340a91edc1a2d06152b7bbe13ebe7e5ae745bad0fc295760f0755f827254e4\": container with ID starting with 1d340a91edc1a2d06152b7bbe13ebe7e5ae745bad0fc295760f0755f827254e4 not found: ID does not exist" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.632888 4721 scope.go:117] "RemoveContainer" containerID="214137d2119b0f131d16421f0c5475602028f01ed014db4a693d69884c3bf169" Feb 02 13:26:11 crc kubenswrapper[4721]: E0202 13:26:11.635421 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"214137d2119b0f131d16421f0c5475602028f01ed014db4a693d69884c3bf169\": container with ID starting with 214137d2119b0f131d16421f0c5475602028f01ed014db4a693d69884c3bf169 not found: ID does not exist" containerID="214137d2119b0f131d16421f0c5475602028f01ed014db4a693d69884c3bf169" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.635494 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"214137d2119b0f131d16421f0c5475602028f01ed014db4a693d69884c3bf169"} err="failed to get container status \"214137d2119b0f131d16421f0c5475602028f01ed014db4a693d69884c3bf169\": rpc error: code = NotFound desc = could not find container \"214137d2119b0f131d16421f0c5475602028f01ed014db4a693d69884c3bf169\": container with ID starting with 214137d2119b0f131d16421f0c5475602028f01ed014db4a693d69884c3bf169 not found: ID does not exist" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.635545 4721 scope.go:117] "RemoveContainer" containerID="b559fdc4b6fca20276abf4dceba9979868b3f8f7cc76c9ba5d01cbaddb599222" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.643913 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6vl65"] Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.658193 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.687902 4721 scope.go:117] "RemoveContainer" containerID="3b0302d825319ad93ac4b7a0c298d8bf5cb6775771000ecd18cf0462db227389" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.715757 4721 scope.go:117] "RemoveContainer" containerID="b0b5571eb117a42c85d555cc9f73ebf04bc2152bc212ed2d51da96cbfa4c0649" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.773850 4721 scope.go:117] "RemoveContainer" containerID="250d61da9c9aec53fa8403200cdb7bf07e9fd015ad372194d53e43f3edd63adb" Feb 02 13:26:11 crc kubenswrapper[4721]: 
I0202 13:26:11.801946 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd964db7-c2d3-477b-be71-60058c811541-scripts\") pod \"ceilometer-0\" (UID: \"dd964db7-c2d3-477b-be71-60058c811541\") " pod="openstack/ceilometer-0" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.802035 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd964db7-c2d3-477b-be71-60058c811541-config-data\") pod \"ceilometer-0\" (UID: \"dd964db7-c2d3-477b-be71-60058c811541\") " pod="openstack/ceilometer-0" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.802082 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd964db7-c2d3-477b-be71-60058c811541-run-httpd\") pod \"ceilometer-0\" (UID: \"dd964db7-c2d3-477b-be71-60058c811541\") " pod="openstack/ceilometer-0" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.802127 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd964db7-c2d3-477b-be71-60058c811541-log-httpd\") pod \"ceilometer-0\" (UID: \"dd964db7-c2d3-477b-be71-60058c811541\") " pod="openstack/ceilometer-0" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.802165 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd964db7-c2d3-477b-be71-60058c811541-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd964db7-c2d3-477b-be71-60058c811541\") " pod="openstack/ceilometer-0" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.802196 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlqf6\" (UniqueName: \"kubernetes.io/projected/dd964db7-c2d3-477b-be71-60058c811541-kube-api-access-rlqf6\") pod \"ceilometer-0\" (UID: \"dd964db7-c2d3-477b-be71-60058c811541\") " pod="openstack/ceilometer-0" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.802275 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd964db7-c2d3-477b-be71-60058c811541-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd964db7-c2d3-477b-be71-60058c811541\") " pod="openstack/ceilometer-0" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.908025 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlqf6\" (UniqueName: \"kubernetes.io/projected/dd964db7-c2d3-477b-be71-60058c811541-kube-api-access-rlqf6\") pod \"ceilometer-0\" (UID: \"dd964db7-c2d3-477b-be71-60058c811541\") " pod="openstack/ceilometer-0" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.908250 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd964db7-c2d3-477b-be71-60058c811541-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd964db7-c2d3-477b-be71-60058c811541\") " pod="openstack/ceilometer-0" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.908310 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd964db7-c2d3-477b-be71-60058c811541-scripts\") pod 
\"ceilometer-0\" (UID: \"dd964db7-c2d3-477b-be71-60058c811541\") " pod="openstack/ceilometer-0" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.908948 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd964db7-c2d3-477b-be71-60058c811541-config-data\") pod \"ceilometer-0\" (UID: \"dd964db7-c2d3-477b-be71-60058c811541\") " pod="openstack/ceilometer-0" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.909099 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd964db7-c2d3-477b-be71-60058c811541-run-httpd\") pod \"ceilometer-0\" (UID: \"dd964db7-c2d3-477b-be71-60058c811541\") " pod="openstack/ceilometer-0" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.909154 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd964db7-c2d3-477b-be71-60058c811541-log-httpd\") pod \"ceilometer-0\" (UID: \"dd964db7-c2d3-477b-be71-60058c811541\") " pod="openstack/ceilometer-0" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.909198 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd964db7-c2d3-477b-be71-60058c811541-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd964db7-c2d3-477b-be71-60058c811541\") " pod="openstack/ceilometer-0" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.915465 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd964db7-c2d3-477b-be71-60058c811541-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd964db7-c2d3-477b-be71-60058c811541\") " pod="openstack/ceilometer-0" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.915885 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd964db7-c2d3-477b-be71-60058c811541-run-httpd\") pod \"ceilometer-0\" (UID: \"dd964db7-c2d3-477b-be71-60058c811541\") " pod="openstack/ceilometer-0" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.916225 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd964db7-c2d3-477b-be71-60058c811541-log-httpd\") pod \"ceilometer-0\" (UID: \"dd964db7-c2d3-477b-be71-60058c811541\") " pod="openstack/ceilometer-0" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.917175 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd964db7-c2d3-477b-be71-60058c811541-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd964db7-c2d3-477b-be71-60058c811541\") " pod="openstack/ceilometer-0" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.921274 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd964db7-c2d3-477b-be71-60058c811541-scripts\") pod \"ceilometer-0\" (UID: \"dd964db7-c2d3-477b-be71-60058c811541\") " pod="openstack/ceilometer-0" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.921971 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd964db7-c2d3-477b-be71-60058c811541-config-data\") pod \"ceilometer-0\" (UID: \"dd964db7-c2d3-477b-be71-60058c811541\") " pod="openstack/ceilometer-0" 
Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.926777 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlqf6\" (UniqueName: \"kubernetes.io/projected/dd964db7-c2d3-477b-be71-60058c811541-kube-api-access-rlqf6\") pod \"ceilometer-0\" (UID: \"dd964db7-c2d3-477b-be71-60058c811541\") " pod="openstack/ceilometer-0" Feb 02 13:26:11 crc kubenswrapper[4721]: I0202 13:26:11.935276 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:26:12 crc kubenswrapper[4721]: W0202 13:26:12.156315 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad02c2c3_e07d_4ab9_8498_26e3c2bfdfb9.slice/crio-db646f49ac24673585383b65fa497f105bd4cdfb471374da88118daa7739bca0 WatchSource:0}: Error finding container db646f49ac24673585383b65fa497f105bd4cdfb471374da88118daa7739bca0: Status 404 returned error can't find the container with id db646f49ac24673585383b65fa497f105bd4cdfb471374da88118daa7739bca0 Feb 02 13:26:12 crc kubenswrapper[4721]: I0202 13:26:12.158643 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-gq5tv"] Feb 02 13:26:12 crc kubenswrapper[4721]: I0202 13:26:12.322716 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-2c26-account-create-update-2r4tb"] Feb 02 13:26:12 crc kubenswrapper[4721]: I0202 13:26:12.433759 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bddd12fa-0653-4199-867f-bfdf51350b39" path="/var/lib/kubelet/pods/bddd12fa-0653-4199-867f-bfdf51350b39/volumes" Feb 02 13:26:12 crc kubenswrapper[4721]: I0202 13:26:12.436306 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2db9546-309a-4e2a-8b57-e2986e6cb500" path="/var/lib/kubelet/pods/c2db9546-309a-4e2a-8b57-e2986e6cb500/volumes" Feb 02 13:26:12 crc kubenswrapper[4721]: I0202 13:26:12.438198 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1170efb-7d4f-4c8b-84dd-6d86925bf5d3" path="/var/lib/kubelet/pods/d1170efb-7d4f-4c8b-84dd-6d86925bf5d3/volumes" Feb 02 13:26:12 crc kubenswrapper[4721]: I0202 13:26:12.504798 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-gq5tv" event={"ID":"ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9","Type":"ContainerStarted","Data":"2995f84a212135f6e3822d895963dca122d6d439797b7dcab3e6c3486dd7be70"} Feb 02 13:26:12 crc kubenswrapper[4721]: I0202 13:26:12.505041 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-gq5tv" event={"ID":"ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9","Type":"ContainerStarted","Data":"db646f49ac24673585383b65fa497f105bd4cdfb471374da88118daa7739bca0"} Feb 02 13:26:12 crc kubenswrapper[4721]: I0202 13:26:12.510126 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-2c26-account-create-update-2r4tb" event={"ID":"997707ef-4296-4151-9385-0fbb48b5e317","Type":"ContainerStarted","Data":"30f0205f517a6c8c56e5656abaa390af0c771c20a778135ec582e019bcd70c1e"} Feb 02 13:26:12 crc kubenswrapper[4721]: I0202 13:26:12.527048 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:26:12 crc kubenswrapper[4721]: I0202 13:26:12.541633 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-gq5tv" podStartSLOduration=1.541607474 podStartE2EDuration="1.541607474s" podCreationTimestamp="2026-02-02 13:26:11 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:26:12.526525126 +0000 UTC m=+1512.829039515" watchObservedRunningTime="2026-02-02 13:26:12.541607474 +0000 UTC m=+1512.844121873" Feb 02 13:26:13 crc kubenswrapper[4721]: I0202 13:26:13.528658 4721 generic.go:334] "Generic (PLEG): container finished" podID="ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9" containerID="2995f84a212135f6e3822d895963dca122d6d439797b7dcab3e6c3486dd7be70" exitCode=0 Feb 02 13:26:13 crc kubenswrapper[4721]: I0202 13:26:13.528702 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-gq5tv" event={"ID":"ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9","Type":"ContainerDied","Data":"2995f84a212135f6e3822d895963dca122d6d439797b7dcab3e6c3486dd7be70"} Feb 02 13:26:13 crc kubenswrapper[4721]: I0202 13:26:13.536838 4721 generic.go:334] "Generic (PLEG): container finished" podID="997707ef-4296-4151-9385-0fbb48b5e317" containerID="1857f80c5a621ba0a1377e0d92d2bafe33fbe9b8df8cc9bde107743c0bafe96d" exitCode=0 Feb 02 13:26:13 crc kubenswrapper[4721]: I0202 13:26:13.536950 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-2c26-account-create-update-2r4tb" event={"ID":"997707ef-4296-4151-9385-0fbb48b5e317","Type":"ContainerDied","Data":"1857f80c5a621ba0a1377e0d92d2bafe33fbe9b8df8cc9bde107743c0bafe96d"} Feb 02 13:26:13 crc kubenswrapper[4721]: I0202 13:26:13.540850 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd964db7-c2d3-477b-be71-60058c811541","Type":"ContainerStarted","Data":"941cea547a9c6139f4cd34b3cba1a9232469327b41a70627da4554d99e83c28b"} Feb 02 13:26:13 crc kubenswrapper[4721]: I0202 13:26:13.540900 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd964db7-c2d3-477b-be71-60058c811541","Type":"ContainerStarted","Data":"72f8e4545197a36201cfdd4ca1128b7b9319c774b1a3e5c0ba44e89faa242906"} Feb 02 13:26:14 crc kubenswrapper[4721]: I0202 13:26:14.556552 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd964db7-c2d3-477b-be71-60058c811541","Type":"ContainerStarted","Data":"3fc0efffdb16822dab9442b0b34ece997fbe26ebe5808aadd9adb66a693a7dd5"} Feb 02 13:26:15 crc kubenswrapper[4721]: I0202 13:26:15.136841 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-2c26-account-create-update-2r4tb" Feb 02 13:26:15 crc kubenswrapper[4721]: I0202 13:26:15.147245 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-gq5tv" Feb 02 13:26:15 crc kubenswrapper[4721]: I0202 13:26:15.323464 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcbwm\" (UniqueName: \"kubernetes.io/projected/ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9-kube-api-access-jcbwm\") pod \"ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9\" (UID: \"ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9\") " Feb 02 13:26:15 crc kubenswrapper[4721]: I0202 13:26:15.323538 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njwrd\" (UniqueName: \"kubernetes.io/projected/997707ef-4296-4151-9385-0fbb48b5e317-kube-api-access-njwrd\") pod \"997707ef-4296-4151-9385-0fbb48b5e317\" (UID: \"997707ef-4296-4151-9385-0fbb48b5e317\") " Feb 02 13:26:15 crc kubenswrapper[4721]: I0202 13:26:15.323595 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9-operator-scripts\") pod \"ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9\" (UID: \"ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9\") " Feb 02 13:26:15 crc kubenswrapper[4721]: I0202 13:26:15.323696 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/997707ef-4296-4151-9385-0fbb48b5e317-operator-scripts\") pod \"997707ef-4296-4151-9385-0fbb48b5e317\" (UID: \"997707ef-4296-4151-9385-0fbb48b5e317\") " Feb 02 13:26:15 crc kubenswrapper[4721]: I0202 13:26:15.324527 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/997707ef-4296-4151-9385-0fbb48b5e317-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "997707ef-4296-4151-9385-0fbb48b5e317" (UID: "997707ef-4296-4151-9385-0fbb48b5e317"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:26:15 crc kubenswrapper[4721]: I0202 13:26:15.324549 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9" (UID: "ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:26:15 crc kubenswrapper[4721]: I0202 13:26:15.342059 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9-kube-api-access-jcbwm" (OuterVolumeSpecName: "kube-api-access-jcbwm") pod "ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9" (UID: "ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9"). InnerVolumeSpecName "kube-api-access-jcbwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:26:15 crc kubenswrapper[4721]: I0202 13:26:15.342135 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/997707ef-4296-4151-9385-0fbb48b5e317-kube-api-access-njwrd" (OuterVolumeSpecName: "kube-api-access-njwrd") pod "997707ef-4296-4151-9385-0fbb48b5e317" (UID: "997707ef-4296-4151-9385-0fbb48b5e317"). InnerVolumeSpecName "kube-api-access-njwrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:26:15 crc kubenswrapper[4721]: I0202 13:26:15.426915 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcbwm\" (UniqueName: \"kubernetes.io/projected/ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9-kube-api-access-jcbwm\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:15 crc kubenswrapper[4721]: I0202 13:26:15.426950 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njwrd\" (UniqueName: \"kubernetes.io/projected/997707ef-4296-4151-9385-0fbb48b5e317-kube-api-access-njwrd\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:15 crc kubenswrapper[4721]: I0202 13:26:15.426962 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:15 crc kubenswrapper[4721]: I0202 13:26:15.426972 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/997707ef-4296-4151-9385-0fbb48b5e317-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:15 crc kubenswrapper[4721]: I0202 13:26:15.570330 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd964db7-c2d3-477b-be71-60058c811541","Type":"ContainerStarted","Data":"2e794fb911c91ba362224447e9ecd49d098fbbb2f1fb15a45aedc118537561bc"} Feb 02 13:26:15 crc kubenswrapper[4721]: I0202 13:26:15.572380 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-gq5tv" event={"ID":"ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9","Type":"ContainerDied","Data":"db646f49ac24673585383b65fa497f105bd4cdfb471374da88118daa7739bca0"} Feb 02 13:26:15 crc kubenswrapper[4721]: I0202 13:26:15.572412 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db646f49ac24673585383b65fa497f105bd4cdfb471374da88118daa7739bca0" Feb 02 13:26:15 crc kubenswrapper[4721]: I0202 13:26:15.572457 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-gq5tv" Feb 02 13:26:15 crc kubenswrapper[4721]: I0202 13:26:15.574680 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-2c26-account-create-update-2r4tb" event={"ID":"997707ef-4296-4151-9385-0fbb48b5e317","Type":"ContainerDied","Data":"30f0205f517a6c8c56e5656abaa390af0c771c20a778135ec582e019bcd70c1e"} Feb 02 13:26:15 crc kubenswrapper[4721]: I0202 13:26:15.574724 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30f0205f517a6c8c56e5656abaa390af0c771c20a778135ec582e019bcd70c1e" Feb 02 13:26:15 crc kubenswrapper[4721]: I0202 13:26:15.574749 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-2c26-account-create-update-2r4tb" Feb 02 13:26:16 crc kubenswrapper[4721]: I0202 13:26:16.851424 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 02 13:26:17 crc kubenswrapper[4721]: I0202 13:26:17.611828 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd964db7-c2d3-477b-be71-60058c811541","Type":"ContainerStarted","Data":"4b856042309c84ea0a6e8c400ccf627a4542f6766de95577e1154ef8996d41d0"} Feb 02 13:26:17 crc kubenswrapper[4721]: I0202 13:26:17.612216 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 13:26:17 crc kubenswrapper[4721]: I0202 13:26:17.642704 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.487876929 podStartE2EDuration="6.642673681s" podCreationTimestamp="2026-02-02 13:26:11 +0000 UTC" firstStartedPulling="2026-02-02 13:26:12.529512497 +0000 UTC m=+1512.832026886" lastFinishedPulling="2026-02-02 13:26:16.684309249 +0000 UTC m=+1516.986823638" observedRunningTime="2026-02-02 13:26:17.635704292 +0000 UTC m=+1517.938218701" watchObservedRunningTime="2026-02-02 13:26:17.642673681 +0000 UTC m=+1517.945188070" Feb 02 13:26:17 crc kubenswrapper[4721]: I0202 13:26:17.744915 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-4rm59"] Feb 02 13:26:17 crc kubenswrapper[4721]: E0202 13:26:17.745798 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9" containerName="mariadb-database-create" Feb 02 13:26:17 crc kubenswrapper[4721]: I0202 13:26:17.745818 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9" containerName="mariadb-database-create" Feb 02 13:26:17 crc kubenswrapper[4721]: E0202 13:26:17.745830 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="997707ef-4296-4151-9385-0fbb48b5e317" containerName="mariadb-account-create-update" Feb 02 13:26:17 crc kubenswrapper[4721]: I0202 13:26:17.745837 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="997707ef-4296-4151-9385-0fbb48b5e317" containerName="mariadb-account-create-update" Feb 02 13:26:17 crc kubenswrapper[4721]: I0202 13:26:17.746136 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9" containerName="mariadb-database-create" Feb 02 13:26:17 crc kubenswrapper[4721]: I0202 13:26:17.746165 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="997707ef-4296-4151-9385-0fbb48b5e317" containerName="mariadb-account-create-update" Feb 02 13:26:17 crc kubenswrapper[4721]: I0202 13:26:17.747080 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4rm59" Feb 02 13:26:17 crc kubenswrapper[4721]: I0202 13:26:17.750768 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 02 13:26:17 crc kubenswrapper[4721]: I0202 13:26:17.752231 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 02 13:26:17 crc kubenswrapper[4721]: I0202 13:26:17.767877 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4rm59"] Feb 02 13:26:17 crc kubenswrapper[4721]: I0202 13:26:17.889956 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798dac79-94bd-4655-b409-4b173956cdbf-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4rm59\" (UID: \"798dac79-94bd-4655-b409-4b173956cdbf\") " pod="openstack/nova-cell0-cell-mapping-4rm59" Feb 02 13:26:17 crc kubenswrapper[4721]: I0202 13:26:17.890033 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlbf6\" (UniqueName: \"kubernetes.io/projected/798dac79-94bd-4655-b409-4b173956cdbf-kube-api-access-mlbf6\") pod \"nova-cell0-cell-mapping-4rm59\" (UID: \"798dac79-94bd-4655-b409-4b173956cdbf\") " pod="openstack/nova-cell0-cell-mapping-4rm59" Feb 02 13:26:17 crc kubenswrapper[4721]: I0202 13:26:17.890151 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798dac79-94bd-4655-b409-4b173956cdbf-config-data\") pod \"nova-cell0-cell-mapping-4rm59\" (UID: \"798dac79-94bd-4655-b409-4b173956cdbf\") " pod="openstack/nova-cell0-cell-mapping-4rm59" Feb 02 13:26:17 crc kubenswrapper[4721]: I0202 13:26:17.890269 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/798dac79-94bd-4655-b409-4b173956cdbf-scripts\") pod \"nova-cell0-cell-mapping-4rm59\" (UID: \"798dac79-94bd-4655-b409-4b173956cdbf\") " pod="openstack/nova-cell0-cell-mapping-4rm59" Feb 02 13:26:17 crc kubenswrapper[4721]: I0202 13:26:17.959354 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 13:26:17 crc kubenswrapper[4721]: I0202 13:26:17.960841 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:26:17 crc kubenswrapper[4721]: I0202 13:26:17.965728 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 02 13:26:17 crc kubenswrapper[4721]: I0202 13:26:17.995048 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/798dac79-94bd-4655-b409-4b173956cdbf-scripts\") pod \"nova-cell0-cell-mapping-4rm59\" (UID: \"798dac79-94bd-4655-b409-4b173956cdbf\") " pod="openstack/nova-cell0-cell-mapping-4rm59" Feb 02 13:26:17 crc kubenswrapper[4721]: I0202 13:26:17.995191 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798dac79-94bd-4655-b409-4b173956cdbf-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4rm59\" (UID: \"798dac79-94bd-4655-b409-4b173956cdbf\") " pod="openstack/nova-cell0-cell-mapping-4rm59" Feb 02 13:26:17 crc kubenswrapper[4721]: I0202 13:26:17.995242 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlbf6\" (UniqueName: \"kubernetes.io/projected/798dac79-94bd-4655-b409-4b173956cdbf-kube-api-access-mlbf6\") pod \"nova-cell0-cell-mapping-4rm59\" (UID: \"798dac79-94bd-4655-b409-4b173956cdbf\") " pod="openstack/nova-cell0-cell-mapping-4rm59" Feb 02 13:26:17 crc kubenswrapper[4721]: I0202 13:26:17.995308 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798dac79-94bd-4655-b409-4b173956cdbf-config-data\") pod \"nova-cell0-cell-mapping-4rm59\" (UID: \"798dac79-94bd-4655-b409-4b173956cdbf\") " pod="openstack/nova-cell0-cell-mapping-4rm59" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.005209 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798dac79-94bd-4655-b409-4b173956cdbf-config-data\") pod \"nova-cell0-cell-mapping-4rm59\" (UID: \"798dac79-94bd-4655-b409-4b173956cdbf\") " pod="openstack/nova-cell0-cell-mapping-4rm59" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.010720 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/798dac79-94bd-4655-b409-4b173956cdbf-scripts\") pod \"nova-cell0-cell-mapping-4rm59\" (UID: \"798dac79-94bd-4655-b409-4b173956cdbf\") " pod="openstack/nova-cell0-cell-mapping-4rm59" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.023275 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.027837 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798dac79-94bd-4655-b409-4b173956cdbf-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4rm59\" (UID: \"798dac79-94bd-4655-b409-4b173956cdbf\") " pod="openstack/nova-cell0-cell-mapping-4rm59" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.042265 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlbf6\" (UniqueName: \"kubernetes.io/projected/798dac79-94bd-4655-b409-4b173956cdbf-kube-api-access-mlbf6\") pod \"nova-cell0-cell-mapping-4rm59\" (UID: \"798dac79-94bd-4655-b409-4b173956cdbf\") " pod="openstack/nova-cell0-cell-mapping-4rm59" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.042414 4721 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.059940 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.068992 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.075780 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4rm59" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.079729 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.097742 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.097958 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh8l7\" (UniqueName: \"kubernetes.io/projected/20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e-kube-api-access-nh8l7\") pod \"nova-cell1-novncproxy-0\" (UID: \"20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.098208 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.117660 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.119301 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.131591 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.140316 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.188765 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.191145 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.199472 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.200819 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16c1a06d-626e-414f-9fa0-a09e68349ffa-logs\") pod \"nova-metadata-0\" (UID: \"16c1a06d-626e-414f-9fa0-a09e68349ffa\") " pod="openstack/nova-metadata-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.200887 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.200913 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16c1a06d-626e-414f-9fa0-a09e68349ffa-config-data\") pod \"nova-metadata-0\" (UID: \"16c1a06d-626e-414f-9fa0-a09e68349ffa\") " pod="openstack/nova-metadata-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.200952 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c1a06d-626e-414f-9fa0-a09e68349ffa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"16c1a06d-626e-414f-9fa0-a09e68349ffa\") " pod="openstack/nova-metadata-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.201000 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5qfc\" (UniqueName: \"kubernetes.io/projected/16c1a06d-626e-414f-9fa0-a09e68349ffa-kube-api-access-d5qfc\") pod \"nova-metadata-0\" (UID: \"16c1a06d-626e-414f-9fa0-a09e68349ffa\") " pod="openstack/nova-metadata-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.201115 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh8l7\" (UniqueName: \"kubernetes.io/projected/20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e-kube-api-access-nh8l7\") pod \"nova-cell1-novncproxy-0\" (UID: \"20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.201201 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.208897 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.212641 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.245437 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh8l7\" (UniqueName: \"kubernetes.io/projected/20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e-kube-api-access-nh8l7\") pod \"nova-cell1-novncproxy-0\" (UID: \"20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.248457 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-pclnt"] Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.251053 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-pclnt" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.265150 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.289907 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.305604 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6deee91e-3b9b-46a0-a05e-613827b42808-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6deee91e-3b9b-46a0-a05e-613827b42808\") " pod="openstack/nova-scheduler-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.305673 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpktt\" (UniqueName: \"kubernetes.io/projected/8715f5a7-97a1-496d-be28-13c326b54135-kube-api-access-wpktt\") pod \"nova-api-0\" (UID: \"8715f5a7-97a1-496d-be28-13c326b54135\") " pod="openstack/nova-api-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.305742 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4h5x\" (UniqueName: \"kubernetes.io/projected/6deee91e-3b9b-46a0-a05e-613827b42808-kube-api-access-m4h5x\") pod \"nova-scheduler-0\" (UID: \"6deee91e-3b9b-46a0-a05e-613827b42808\") " pod="openstack/nova-scheduler-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.305799 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8715f5a7-97a1-496d-be28-13c326b54135-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8715f5a7-97a1-496d-be28-13c326b54135\") " pod="openstack/nova-api-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.305853 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8715f5a7-97a1-496d-be28-13c326b54135-logs\") pod \"nova-api-0\" (UID: \"8715f5a7-97a1-496d-be28-13c326b54135\") " pod="openstack/nova-api-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.305946 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16c1a06d-626e-414f-9fa0-a09e68349ffa-logs\") pod \"nova-metadata-0\" (UID: \"16c1a06d-626e-414f-9fa0-a09e68349ffa\") " pod="openstack/nova-metadata-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.306001 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/16c1a06d-626e-414f-9fa0-a09e68349ffa-config-data\") pod \"nova-metadata-0\" (UID: \"16c1a06d-626e-414f-9fa0-a09e68349ffa\") " pod="openstack/nova-metadata-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.306058 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c1a06d-626e-414f-9fa0-a09e68349ffa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"16c1a06d-626e-414f-9fa0-a09e68349ffa\") " pod="openstack/nova-metadata-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.306573 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16c1a06d-626e-414f-9fa0-a09e68349ffa-logs\") pod \"nova-metadata-0\" (UID: \"16c1a06d-626e-414f-9fa0-a09e68349ffa\") " pod="openstack/nova-metadata-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.306692 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6deee91e-3b9b-46a0-a05e-613827b42808-config-data\") pod \"nova-scheduler-0\" (UID: \"6deee91e-3b9b-46a0-a05e-613827b42808\") " pod="openstack/nova-scheduler-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.306755 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5qfc\" (UniqueName: \"kubernetes.io/projected/16c1a06d-626e-414f-9fa0-a09e68349ffa-kube-api-access-d5qfc\") pod \"nova-metadata-0\" (UID: \"16c1a06d-626e-414f-9fa0-a09e68349ffa\") " pod="openstack/nova-metadata-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.306785 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8715f5a7-97a1-496d-be28-13c326b54135-config-data\") pod \"nova-api-0\" (UID: \"8715f5a7-97a1-496d-be28-13c326b54135\") " pod="openstack/nova-api-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.312141 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-pclnt"] Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.314295 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c1a06d-626e-414f-9fa0-a09e68349ffa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"16c1a06d-626e-414f-9fa0-a09e68349ffa\") " pod="openstack/nova-metadata-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.319790 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16c1a06d-626e-414f-9fa0-a09e68349ffa-config-data\") pod \"nova-metadata-0\" (UID: \"16c1a06d-626e-414f-9fa0-a09e68349ffa\") " pod="openstack/nova-metadata-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.337662 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5qfc\" (UniqueName: \"kubernetes.io/projected/16c1a06d-626e-414f-9fa0-a09e68349ffa-kube-api-access-d5qfc\") pod \"nova-metadata-0\" (UID: \"16c1a06d-626e-414f-9fa0-a09e68349ffa\") " pod="openstack/nova-metadata-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.343043 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.408632 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4h5x\" (UniqueName: \"kubernetes.io/projected/6deee91e-3b9b-46a0-a05e-613827b42808-kube-api-access-m4h5x\") pod \"nova-scheduler-0\" (UID: \"6deee91e-3b9b-46a0-a05e-613827b42808\") " pod="openstack/nova-scheduler-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.408687 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clh4b\" (UniqueName: \"kubernetes.io/projected/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-kube-api-access-clh4b\") pod \"dnsmasq-dns-9b86998b5-pclnt\" (UID: \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\") " pod="openstack/dnsmasq-dns-9b86998b5-pclnt" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.408724 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-dns-svc\") pod \"dnsmasq-dns-9b86998b5-pclnt\" (UID: \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\") " pod="openstack/dnsmasq-dns-9b86998b5-pclnt" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.408774 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8715f5a7-97a1-496d-be28-13c326b54135-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8715f5a7-97a1-496d-be28-13c326b54135\") " pod="openstack/nova-api-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.408814 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8715f5a7-97a1-496d-be28-13c326b54135-logs\") pod \"nova-api-0\" (UID: \"8715f5a7-97a1-496d-be28-13c326b54135\") " pod="openstack/nova-api-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.408916 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-config\") pod \"dnsmasq-dns-9b86998b5-pclnt\" (UID: \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\") " pod="openstack/dnsmasq-dns-9b86998b5-pclnt" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.408956 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6deee91e-3b9b-46a0-a05e-613827b42808-config-data\") pod \"nova-scheduler-0\" (UID: \"6deee91e-3b9b-46a0-a05e-613827b42808\") " pod="openstack/nova-scheduler-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.408993 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8715f5a7-97a1-496d-be28-13c326b54135-config-data\") pod \"nova-api-0\" (UID: \"8715f5a7-97a1-496d-be28-13c326b54135\") " pod="openstack/nova-api-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.409013 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-pclnt\" (UID: \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\") " pod="openstack/dnsmasq-dns-9b86998b5-pclnt" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.409042 4721 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6deee91e-3b9b-46a0-a05e-613827b42808-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6deee91e-3b9b-46a0-a05e-613827b42808\") " pod="openstack/nova-scheduler-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.409062 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-pclnt\" (UID: \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\") " pod="openstack/dnsmasq-dns-9b86998b5-pclnt" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.409114 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpktt\" (UniqueName: \"kubernetes.io/projected/8715f5a7-97a1-496d-be28-13c326b54135-kube-api-access-wpktt\") pod \"nova-api-0\" (UID: \"8715f5a7-97a1-496d-be28-13c326b54135\") " pod="openstack/nova-api-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.409141 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-pclnt\" (UID: \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\") " pod="openstack/dnsmasq-dns-9b86998b5-pclnt" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.421538 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8715f5a7-97a1-496d-be28-13c326b54135-config-data\") pod \"nova-api-0\" (UID: \"8715f5a7-97a1-496d-be28-13c326b54135\") " pod="openstack/nova-api-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.421879 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8715f5a7-97a1-496d-be28-13c326b54135-logs\") pod \"nova-api-0\" (UID: \"8715f5a7-97a1-496d-be28-13c326b54135\") " pod="openstack/nova-api-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.427324 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8715f5a7-97a1-496d-be28-13c326b54135-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8715f5a7-97a1-496d-be28-13c326b54135\") " pod="openstack/nova-api-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.433627 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6deee91e-3b9b-46a0-a05e-613827b42808-config-data\") pod \"nova-scheduler-0\" (UID: \"6deee91e-3b9b-46a0-a05e-613827b42808\") " pod="openstack/nova-scheduler-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.434829 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6deee91e-3b9b-46a0-a05e-613827b42808-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6deee91e-3b9b-46a0-a05e-613827b42808\") " pod="openstack/nova-scheduler-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.451784 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4h5x\" (UniqueName: \"kubernetes.io/projected/6deee91e-3b9b-46a0-a05e-613827b42808-kube-api-access-m4h5x\") pod \"nova-scheduler-0\" (UID: \"6deee91e-3b9b-46a0-a05e-613827b42808\") " pod="openstack/nova-scheduler-0" Feb 02 
13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.452157 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpktt\" (UniqueName: \"kubernetes.io/projected/8715f5a7-97a1-496d-be28-13c326b54135-kube-api-access-wpktt\") pod \"nova-api-0\" (UID: \"8715f5a7-97a1-496d-be28-13c326b54135\") " pod="openstack/nova-api-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.513334 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-config\") pod \"dnsmasq-dns-9b86998b5-pclnt\" (UID: \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\") " pod="openstack/dnsmasq-dns-9b86998b5-pclnt" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.513866 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-pclnt\" (UID: \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\") " pod="openstack/dnsmasq-dns-9b86998b5-pclnt" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.513948 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-pclnt\" (UID: \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\") " pod="openstack/dnsmasq-dns-9b86998b5-pclnt" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.514001 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-pclnt\" (UID: \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\") " pod="openstack/dnsmasq-dns-9b86998b5-pclnt" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.514100 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clh4b\" (UniqueName: \"kubernetes.io/projected/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-kube-api-access-clh4b\") pod \"dnsmasq-dns-9b86998b5-pclnt\" (UID: \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\") " pod="openstack/dnsmasq-dns-9b86998b5-pclnt" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.514144 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-dns-svc\") pod \"dnsmasq-dns-9b86998b5-pclnt\" (UID: \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\") " pod="openstack/dnsmasq-dns-9b86998b5-pclnt" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.516315 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-pclnt\" (UID: \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\") " pod="openstack/dnsmasq-dns-9b86998b5-pclnt" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.516873 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-pclnt\" (UID: \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\") " pod="openstack/dnsmasq-dns-9b86998b5-pclnt" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.519891 4721 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-config\") pod \"dnsmasq-dns-9b86998b5-pclnt\" (UID: \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\") " pod="openstack/dnsmasq-dns-9b86998b5-pclnt" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.525867 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-pclnt\" (UID: \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\") " pod="openstack/dnsmasq-dns-9b86998b5-pclnt" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.534835 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-dns-svc\") pod \"dnsmasq-dns-9b86998b5-pclnt\" (UID: \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\") " pod="openstack/dnsmasq-dns-9b86998b5-pclnt" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.540816 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clh4b\" (UniqueName: \"kubernetes.io/projected/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-kube-api-access-clh4b\") pod \"dnsmasq-dns-9b86998b5-pclnt\" (UID: \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\") " pod="openstack/dnsmasq-dns-9b86998b5-pclnt" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.679734 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.699913 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.735506 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-pclnt" Feb 02 13:26:18 crc kubenswrapper[4721]: I0202 13:26:18.895387 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4rm59"] Feb 02 13:26:18 crc kubenswrapper[4721]: W0202 13:26:18.920576 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod798dac79_94bd_4655_b409_4b173956cdbf.slice/crio-f420768dcd70c2968aba8188e47922f2bb1ac5868d1594e6d772fbe971edf203 WatchSource:0}: Error finding container f420768dcd70c2968aba8188e47922f2bb1ac5868d1594e6d772fbe971edf203: Status 404 returned error can't find the container with id f420768dcd70c2968aba8188e47922f2bb1ac5868d1594e6d772fbe971edf203 Feb 02 13:26:19 crc kubenswrapper[4721]: I0202 13:26:19.181901 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 13:26:19 crc kubenswrapper[4721]: W0202 13:26:19.185354 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16c1a06d_626e_414f_9fa0_a09e68349ffa.slice/crio-4979432985debe9a9103a70638cdc02af7e7ff0e27c39cb5362f33e117dad874 WatchSource:0}: Error finding container 4979432985debe9a9103a70638cdc02af7e7ff0e27c39cb5362f33e117dad874: Status 404 returned error can't find the container with id 4979432985debe9a9103a70638cdc02af7e7ff0e27c39cb5362f33e117dad874 Feb 02 13:26:19 crc kubenswrapper[4721]: I0202 13:26:19.205848 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:26:19 crc kubenswrapper[4721]: I0202 13:26:19.262458 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hclll"] Feb 02 13:26:19 crc kubenswrapper[4721]: I0202 13:26:19.267605 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hclll" Feb 02 13:26:19 crc kubenswrapper[4721]: I0202 13:26:19.278361 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 02 13:26:19 crc kubenswrapper[4721]: I0202 13:26:19.278564 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 02 13:26:19 crc kubenswrapper[4721]: I0202 13:26:19.283936 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hclll"] Feb 02 13:26:19 crc kubenswrapper[4721]: I0202 13:26:19.355116 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdcgd\" (UniqueName: \"kubernetes.io/projected/6d38541e-139a-425e-a7bd-f7c484f7266b-kube-api-access-fdcgd\") pod \"nova-cell1-conductor-db-sync-hclll\" (UID: \"6d38541e-139a-425e-a7bd-f7c484f7266b\") " pod="openstack/nova-cell1-conductor-db-sync-hclll" Feb 02 13:26:19 crc kubenswrapper[4721]: I0202 13:26:19.355801 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d38541e-139a-425e-a7bd-f7c484f7266b-config-data\") pod \"nova-cell1-conductor-db-sync-hclll\" (UID: \"6d38541e-139a-425e-a7bd-f7c484f7266b\") " pod="openstack/nova-cell1-conductor-db-sync-hclll" Feb 02 13:26:19 crc kubenswrapper[4721]: I0202 13:26:19.356061 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d38541e-139a-425e-a7bd-f7c484f7266b-scripts\") pod \"nova-cell1-conductor-db-sync-hclll\" (UID: \"6d38541e-139a-425e-a7bd-f7c484f7266b\") " pod="openstack/nova-cell1-conductor-db-sync-hclll" Feb 02 13:26:19 crc kubenswrapper[4721]: I0202 13:26:19.356926 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d38541e-139a-425e-a7bd-f7c484f7266b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-hclll\" (UID: \"6d38541e-139a-425e-a7bd-f7c484f7266b\") " pod="openstack/nova-cell1-conductor-db-sync-hclll" Feb 02 13:26:19 crc kubenswrapper[4721]: W0202 13:26:19.437568 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6deee91e_3b9b_46a0_a05e_613827b42808.slice/crio-6be24bc44a924556f23ce7d2ad20d369ad32da8a65f3765a7bf2193db6ff3cfc WatchSource:0}: Error finding container 6be24bc44a924556f23ce7d2ad20d369ad32da8a65f3765a7bf2193db6ff3cfc: Status 404 returned error can't find the container with id 6be24bc44a924556f23ce7d2ad20d369ad32da8a65f3765a7bf2193db6ff3cfc Feb 02 13:26:19 crc kubenswrapper[4721]: I0202 13:26:19.440627 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 13:26:19 crc kubenswrapper[4721]: I0202 13:26:19.459458 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d38541e-139a-425e-a7bd-f7c484f7266b-scripts\") pod \"nova-cell1-conductor-db-sync-hclll\" (UID: \"6d38541e-139a-425e-a7bd-f7c484f7266b\") " pod="openstack/nova-cell1-conductor-db-sync-hclll" Feb 02 13:26:19 crc kubenswrapper[4721]: I0202 13:26:19.459710 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6d38541e-139a-425e-a7bd-f7c484f7266b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-hclll\" (UID: \"6d38541e-139a-425e-a7bd-f7c484f7266b\") " pod="openstack/nova-cell1-conductor-db-sync-hclll" Feb 02 13:26:19 crc kubenswrapper[4721]: I0202 13:26:19.459868 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdcgd\" (UniqueName: \"kubernetes.io/projected/6d38541e-139a-425e-a7bd-f7c484f7266b-kube-api-access-fdcgd\") pod \"nova-cell1-conductor-db-sync-hclll\" (UID: \"6d38541e-139a-425e-a7bd-f7c484f7266b\") " pod="openstack/nova-cell1-conductor-db-sync-hclll" Feb 02 13:26:19 crc kubenswrapper[4721]: I0202 13:26:19.459970 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d38541e-139a-425e-a7bd-f7c484f7266b-config-data\") pod \"nova-cell1-conductor-db-sync-hclll\" (UID: \"6d38541e-139a-425e-a7bd-f7c484f7266b\") " pod="openstack/nova-cell1-conductor-db-sync-hclll" Feb 02 13:26:19 crc kubenswrapper[4721]: I0202 13:26:19.504566 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d38541e-139a-425e-a7bd-f7c484f7266b-scripts\") pod \"nova-cell1-conductor-db-sync-hclll\" (UID: \"6d38541e-139a-425e-a7bd-f7c484f7266b\") " pod="openstack/nova-cell1-conductor-db-sync-hclll" Feb 02 13:26:19 crc kubenswrapper[4721]: I0202 13:26:19.508748 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdcgd\" (UniqueName: \"kubernetes.io/projected/6d38541e-139a-425e-a7bd-f7c484f7266b-kube-api-access-fdcgd\") pod \"nova-cell1-conductor-db-sync-hclll\" (UID: \"6d38541e-139a-425e-a7bd-f7c484f7266b\") " pod="openstack/nova-cell1-conductor-db-sync-hclll" Feb 02 13:26:19 crc kubenswrapper[4721]: I0202 13:26:19.512554 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d38541e-139a-425e-a7bd-f7c484f7266b-config-data\") pod \"nova-cell1-conductor-db-sync-hclll\" (UID: \"6d38541e-139a-425e-a7bd-f7c484f7266b\") " pod="openstack/nova-cell1-conductor-db-sync-hclll" Feb 02 13:26:19 crc kubenswrapper[4721]: I0202 13:26:19.524760 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d38541e-139a-425e-a7bd-f7c484f7266b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-hclll\" (UID: \"6d38541e-139a-425e-a7bd-f7c484f7266b\") " pod="openstack/nova-cell1-conductor-db-sync-hclll" Feb 02 13:26:19 crc kubenswrapper[4721]: I0202 13:26:19.634658 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:26:19 crc kubenswrapper[4721]: I0202 13:26:19.659701 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4rm59" event={"ID":"798dac79-94bd-4655-b409-4b173956cdbf","Type":"ContainerStarted","Data":"986695a877ff6ee1eff46d199a1d9a03005ab3f7a3f83da3d9212653e0413f7c"} Feb 02 13:26:19 crc kubenswrapper[4721]: I0202 13:26:19.659757 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4rm59" event={"ID":"798dac79-94bd-4655-b409-4b173956cdbf","Type":"ContainerStarted","Data":"f420768dcd70c2968aba8188e47922f2bb1ac5868d1594e6d772fbe971edf203"} Feb 02 13:26:19 crc kubenswrapper[4721]: I0202 13:26:19.662642 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hclll" Feb 02 13:26:19 crc kubenswrapper[4721]: I0202 13:26:19.666267 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"16c1a06d-626e-414f-9fa0-a09e68349ffa","Type":"ContainerStarted","Data":"4979432985debe9a9103a70638cdc02af7e7ff0e27c39cb5362f33e117dad874"} Feb 02 13:26:19 crc kubenswrapper[4721]: I0202 13:26:19.691657 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e","Type":"ContainerStarted","Data":"b40b44f42b58694b1e7abb93930ae837bfaf27b3b7a9cd3931ed69ef1a81d994"} Feb 02 13:26:19 crc kubenswrapper[4721]: W0202 13:26:19.696578 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb87e6edb_4947_41a9_b95c_5120f9b4dbdc.slice/crio-5099999a072997f647e2393c3c5cb7d2e07ed8a533074bc15ca483b0989cce5e WatchSource:0}: Error finding container 5099999a072997f647e2393c3c5cb7d2e07ed8a533074bc15ca483b0989cce5e: Status 404 returned error can't find the container with id 5099999a072997f647e2393c3c5cb7d2e07ed8a533074bc15ca483b0989cce5e Feb 02 13:26:19 crc kubenswrapper[4721]: I0202 13:26:19.698780 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-pclnt"] Feb 02 13:26:19 crc kubenswrapper[4721]: I0202 13:26:19.701849 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6deee91e-3b9b-46a0-a05e-613827b42808","Type":"ContainerStarted","Data":"6be24bc44a924556f23ce7d2ad20d369ad32da8a65f3765a7bf2193db6ff3cfc"} Feb 02 13:26:19 crc kubenswrapper[4721]: I0202 13:26:19.715885 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-4rm59" podStartSLOduration=2.7158637370000003 podStartE2EDuration="2.715863737s" podCreationTimestamp="2026-02-02 13:26:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:26:19.692985367 +0000 UTC m=+1519.995499756" watchObservedRunningTime="2026-02-02 13:26:19.715863737 +0000 UTC m=+1520.018378136" Feb 02 13:26:20 crc kubenswrapper[4721]: E0202 13:26:20.582666 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf01b253a_c7c6_4c9e_a800_a1732ba06f37.slice/crio-5b3069f620076da85927b973fa828028181bff883e33dc0cd16ab04434fa94e6\": RecentStats: unable to find data in memory cache]" Feb 02 13:26:20 crc kubenswrapper[4721]: I0202 13:26:20.585494 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hclll"] Feb 02 13:26:20 crc kubenswrapper[4721]: W0202 13:26:20.614654 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d38541e_139a_425e_a7bd_f7c484f7266b.slice/crio-f16e0687e4c95545bad8ad3d8adc326046486898b2e5a9a157138600b93876bb WatchSource:0}: Error finding container f16e0687e4c95545bad8ad3d8adc326046486898b2e5a9a157138600b93876bb: Status 404 returned error can't find the container with id f16e0687e4c95545bad8ad3d8adc326046486898b2e5a9a157138600b93876bb Feb 02 13:26:20 crc kubenswrapper[4721]: I0202 13:26:20.724232 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"8715f5a7-97a1-496d-be28-13c326b54135","Type":"ContainerStarted","Data":"dda040b5b2dc6f664a770c8e322b217a952cd150043f7182ef1730acf2437842"} Feb 02 13:26:20 crc kubenswrapper[4721]: I0202 13:26:20.731827 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hclll" event={"ID":"6d38541e-139a-425e-a7bd-f7c484f7266b","Type":"ContainerStarted","Data":"f16e0687e4c95545bad8ad3d8adc326046486898b2e5a9a157138600b93876bb"} Feb 02 13:26:20 crc kubenswrapper[4721]: I0202 13:26:20.735346 4721 generic.go:334] "Generic (PLEG): container finished" podID="b87e6edb-4947-41a9-b95c-5120f9b4dbdc" containerID="4daf317fd25f62ffeb9c6c1f2bdb1a7d756e3cbacfa5fcec5e77dfc48d7f394c" exitCode=0 Feb 02 13:26:20 crc kubenswrapper[4721]: I0202 13:26:20.736558 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-pclnt" event={"ID":"b87e6edb-4947-41a9-b95c-5120f9b4dbdc","Type":"ContainerDied","Data":"4daf317fd25f62ffeb9c6c1f2bdb1a7d756e3cbacfa5fcec5e77dfc48d7f394c"} Feb 02 13:26:20 crc kubenswrapper[4721]: I0202 13:26:20.736587 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-pclnt" event={"ID":"b87e6edb-4947-41a9-b95c-5120f9b4dbdc","Type":"ContainerStarted","Data":"5099999a072997f647e2393c3c5cb7d2e07ed8a533074bc15ca483b0989cce5e"} Feb 02 13:26:21 crc kubenswrapper[4721]: I0202 13:26:21.762048 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hclll" event={"ID":"6d38541e-139a-425e-a7bd-f7c484f7266b","Type":"ContainerStarted","Data":"7eef54336f3dd0e3ab875f1a48960393b687a8f714292e0019c4853d4094a4be"} Feb 02 13:26:21 crc kubenswrapper[4721]: I0202 13:26:21.780724 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-pclnt" event={"ID":"b87e6edb-4947-41a9-b95c-5120f9b4dbdc","Type":"ContainerStarted","Data":"513383a4b8dba8040ab714a52c634492b679dbd50ddd5850c311449d4d0f662d"} Feb 02 13:26:21 crc kubenswrapper[4721]: I0202 13:26:21.789943 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-pclnt" Feb 02 13:26:21 crc kubenswrapper[4721]: I0202 13:26:21.790257 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-hclll" podStartSLOduration=2.790234957 podStartE2EDuration="2.790234957s" podCreationTimestamp="2026-02-02 13:26:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:26:21.788107878 +0000 UTC m=+1522.090622287" watchObservedRunningTime="2026-02-02 13:26:21.790234957 +0000 UTC m=+1522.092749356" Feb 02 13:26:21 crc kubenswrapper[4721]: I0202 13:26:21.822488 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-pclnt" podStartSLOduration=3.822468641 podStartE2EDuration="3.822468641s" podCreationTimestamp="2026-02-02 13:26:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:26:21.814060113 +0000 UTC m=+1522.116574502" watchObservedRunningTime="2026-02-02 13:26:21.822468641 +0000 UTC m=+1522.124983030" Feb 02 13:26:21 crc kubenswrapper[4721]: I0202 13:26:21.983754 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 13:26:22 crc kubenswrapper[4721]: I0202 13:26:22.001864 4721 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:26:24 crc kubenswrapper[4721]: I0202 13:26:24.830467 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e","Type":"ContainerStarted","Data":"39bda0040f1cfe72cdc0ea0bc846c40e1727cedfc5ccf95841b81f4bbd9ed35a"} Feb 02 13:26:24 crc kubenswrapper[4721]: I0202 13:26:24.830965 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://39bda0040f1cfe72cdc0ea0bc846c40e1727cedfc5ccf95841b81f4bbd9ed35a" gracePeriod=30 Feb 02 13:26:24 crc kubenswrapper[4721]: I0202 13:26:24.858798 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6deee91e-3b9b-46a0-a05e-613827b42808","Type":"ContainerStarted","Data":"3d9337e6ad6b91226e1d6dffcacca3ebd66fbbc0402ee3e8f5821c0fb854a95c"} Feb 02 13:26:24 crc kubenswrapper[4721]: I0202 13:26:24.876127 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8715f5a7-97a1-496d-be28-13c326b54135","Type":"ContainerStarted","Data":"e678d0355031514e26c0e876e17ad1d6766f6e13bf506ad8e8a465663e92ced1"} Feb 02 13:26:24 crc kubenswrapper[4721]: I0202 13:26:24.880465 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"16c1a06d-626e-414f-9fa0-a09e68349ffa","Type":"ContainerStarted","Data":"d2a55d0b43fdef668c43d93ef586abd1a401dd6f4301e561f85ee5cbf104d449"} Feb 02 13:26:24 crc kubenswrapper[4721]: I0202 13:26:24.893250 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.758532675 podStartE2EDuration="7.893232543s" podCreationTimestamp="2026-02-02 13:26:17 +0000 UTC" firstStartedPulling="2026-02-02 13:26:19.22370166 +0000 UTC m=+1519.526216049" lastFinishedPulling="2026-02-02 13:26:24.358401528 +0000 UTC m=+1524.660915917" observedRunningTime="2026-02-02 13:26:24.85036075 +0000 UTC m=+1525.152875139" watchObservedRunningTime="2026-02-02 13:26:24.893232543 +0000 UTC m=+1525.195746932" Feb 02 13:26:24 crc kubenswrapper[4721]: I0202 13:26:24.897266 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.006580082 podStartE2EDuration="7.897250852s" podCreationTimestamp="2026-02-02 13:26:17 +0000 UTC" firstStartedPulling="2026-02-02 13:26:19.467743098 +0000 UTC m=+1519.770257487" lastFinishedPulling="2026-02-02 13:26:24.358413868 +0000 UTC m=+1524.660928257" observedRunningTime="2026-02-02 13:26:24.875872932 +0000 UTC m=+1525.178387331" watchObservedRunningTime="2026-02-02 13:26:24.897250852 +0000 UTC m=+1525.199765241" Feb 02 13:26:25 crc kubenswrapper[4721]: E0202 13:26:25.054849 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf01b253a_c7c6_4c9e_a800_a1732ba06f37.slice/crio-5b3069f620076da85927b973fa828028181bff883e33dc0cd16ab04434fa94e6\": RecentStats: unable to find data in memory cache]" Feb 02 13:26:25 crc kubenswrapper[4721]: I0202 13:26:25.893972 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"8715f5a7-97a1-496d-be28-13c326b54135","Type":"ContainerStarted","Data":"187aece21bb2425b5bf5ce341d27bac916d46fddbd34aeac0734b4f84ad64e63"} Feb 02 13:26:25 crc kubenswrapper[4721]: I0202 13:26:25.895899 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"16c1a06d-626e-414f-9fa0-a09e68349ffa","Type":"ContainerStarted","Data":"26bfcaea73d1b0a0e727ff022ecb290ea5ebc0e49635be31a814dbc20238e47f"} Feb 02 13:26:25 crc kubenswrapper[4721]: I0202 13:26:25.896134 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="16c1a06d-626e-414f-9fa0-a09e68349ffa" containerName="nova-metadata-log" containerID="cri-o://d2a55d0b43fdef668c43d93ef586abd1a401dd6f4301e561f85ee5cbf104d449" gracePeriod=30 Feb 02 13:26:25 crc kubenswrapper[4721]: I0202 13:26:25.896168 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="16c1a06d-626e-414f-9fa0-a09e68349ffa" containerName="nova-metadata-metadata" containerID="cri-o://26bfcaea73d1b0a0e727ff022ecb290ea5ebc0e49635be31a814dbc20238e47f" gracePeriod=30 Feb 02 13:26:25 crc kubenswrapper[4721]: I0202 13:26:25.921726 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.22140097 podStartE2EDuration="7.921706396s" podCreationTimestamp="2026-02-02 13:26:18 +0000 UTC" firstStartedPulling="2026-02-02 13:26:19.656692423 +0000 UTC m=+1519.959206812" lastFinishedPulling="2026-02-02 13:26:24.356997849 +0000 UTC m=+1524.659512238" observedRunningTime="2026-02-02 13:26:25.91337204 +0000 UTC m=+1526.215886429" watchObservedRunningTime="2026-02-02 13:26:25.921706396 +0000 UTC m=+1526.224220775" Feb 02 13:26:25 crc kubenswrapper[4721]: I0202 13:26:25.949243 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.787586723 podStartE2EDuration="8.949224692s" podCreationTimestamp="2026-02-02 13:26:17 +0000 UTC" firstStartedPulling="2026-02-02 13:26:19.217013288 +0000 UTC m=+1519.519527677" lastFinishedPulling="2026-02-02 13:26:24.378651237 +0000 UTC m=+1524.681165646" observedRunningTime="2026-02-02 13:26:25.937789652 +0000 UTC m=+1526.240304041" watchObservedRunningTime="2026-02-02 13:26:25.949224692 +0000 UTC m=+1526.251739081" Feb 02 13:26:26 crc kubenswrapper[4721]: I0202 13:26:26.572027 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 13:26:26 crc kubenswrapper[4721]: I0202 13:26:26.605706 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16c1a06d-626e-414f-9fa0-a09e68349ffa-logs\") pod \"16c1a06d-626e-414f-9fa0-a09e68349ffa\" (UID: \"16c1a06d-626e-414f-9fa0-a09e68349ffa\") " Feb 02 13:26:26 crc kubenswrapper[4721]: I0202 13:26:26.605753 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5qfc\" (UniqueName: \"kubernetes.io/projected/16c1a06d-626e-414f-9fa0-a09e68349ffa-kube-api-access-d5qfc\") pod \"16c1a06d-626e-414f-9fa0-a09e68349ffa\" (UID: \"16c1a06d-626e-414f-9fa0-a09e68349ffa\") " Feb 02 13:26:26 crc kubenswrapper[4721]: I0202 13:26:26.605808 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c1a06d-626e-414f-9fa0-a09e68349ffa-combined-ca-bundle\") pod \"16c1a06d-626e-414f-9fa0-a09e68349ffa\" (UID: \"16c1a06d-626e-414f-9fa0-a09e68349ffa\") " Feb 02 13:26:26 crc kubenswrapper[4721]: I0202 13:26:26.605876 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16c1a06d-626e-414f-9fa0-a09e68349ffa-config-data\") pod \"16c1a06d-626e-414f-9fa0-a09e68349ffa\" (UID: \"16c1a06d-626e-414f-9fa0-a09e68349ffa\") " Feb 02 13:26:26 crc kubenswrapper[4721]: I0202 13:26:26.607660 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16c1a06d-626e-414f-9fa0-a09e68349ffa-logs" (OuterVolumeSpecName: "logs") pod "16c1a06d-626e-414f-9fa0-a09e68349ffa" (UID: "16c1a06d-626e-414f-9fa0-a09e68349ffa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:26:26 crc kubenswrapper[4721]: I0202 13:26:26.616960 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16c1a06d-626e-414f-9fa0-a09e68349ffa-kube-api-access-d5qfc" (OuterVolumeSpecName: "kube-api-access-d5qfc") pod "16c1a06d-626e-414f-9fa0-a09e68349ffa" (UID: "16c1a06d-626e-414f-9fa0-a09e68349ffa"). InnerVolumeSpecName "kube-api-access-d5qfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:26:26 crc kubenswrapper[4721]: E0202 13:26:26.644362 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16c1a06d-626e-414f-9fa0-a09e68349ffa-config-data podName:16c1a06d-626e-414f-9fa0-a09e68349ffa nodeName:}" failed. No retries permitted until 2026-02-02 13:26:27.144334045 +0000 UTC m=+1527.446848434 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/16c1a06d-626e-414f-9fa0-a09e68349ffa-config-data") pod "16c1a06d-626e-414f-9fa0-a09e68349ffa" (UID: "16c1a06d-626e-414f-9fa0-a09e68349ffa") : error deleting /var/lib/kubelet/pods/16c1a06d-626e-414f-9fa0-a09e68349ffa/volume-subpaths: remove /var/lib/kubelet/pods/16c1a06d-626e-414f-9fa0-a09e68349ffa/volume-subpaths: no such file or directory Feb 02 13:26:26 crc kubenswrapper[4721]: I0202 13:26:26.647365 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c1a06d-626e-414f-9fa0-a09e68349ffa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16c1a06d-626e-414f-9fa0-a09e68349ffa" (UID: "16c1a06d-626e-414f-9fa0-a09e68349ffa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:26:26 crc kubenswrapper[4721]: I0202 13:26:26.709608 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c1a06d-626e-414f-9fa0-a09e68349ffa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:26 crc kubenswrapper[4721]: I0202 13:26:26.709652 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16c1a06d-626e-414f-9fa0-a09e68349ffa-logs\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:26 crc kubenswrapper[4721]: I0202 13:26:26.709671 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5qfc\" (UniqueName: \"kubernetes.io/projected/16c1a06d-626e-414f-9fa0-a09e68349ffa-kube-api-access-d5qfc\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:26 crc kubenswrapper[4721]: I0202 13:26:26.910092 4721 generic.go:334] "Generic (PLEG): container finished" podID="16c1a06d-626e-414f-9fa0-a09e68349ffa" containerID="26bfcaea73d1b0a0e727ff022ecb290ea5ebc0e49635be31a814dbc20238e47f" exitCode=0 Feb 02 13:26:26 crc kubenswrapper[4721]: I0202 13:26:26.910127 4721 generic.go:334] "Generic (PLEG): container finished" podID="16c1a06d-626e-414f-9fa0-a09e68349ffa" containerID="d2a55d0b43fdef668c43d93ef586abd1a401dd6f4301e561f85ee5cbf104d449" exitCode=143 Feb 02 13:26:26 crc kubenswrapper[4721]: I0202 13:26:26.910177 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 13:26:26 crc kubenswrapper[4721]: I0202 13:26:26.910807 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"16c1a06d-626e-414f-9fa0-a09e68349ffa","Type":"ContainerDied","Data":"26bfcaea73d1b0a0e727ff022ecb290ea5ebc0e49635be31a814dbc20238e47f"} Feb 02 13:26:26 crc kubenswrapper[4721]: I0202 13:26:26.910860 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"16c1a06d-626e-414f-9fa0-a09e68349ffa","Type":"ContainerDied","Data":"d2a55d0b43fdef668c43d93ef586abd1a401dd6f4301e561f85ee5cbf104d449"} Feb 02 13:26:26 crc kubenswrapper[4721]: I0202 13:26:26.910874 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"16c1a06d-626e-414f-9fa0-a09e68349ffa","Type":"ContainerDied","Data":"4979432985debe9a9103a70638cdc02af7e7ff0e27c39cb5362f33e117dad874"} Feb 02 13:26:26 crc kubenswrapper[4721]: I0202 13:26:26.910892 4721 scope.go:117] "RemoveContainer" containerID="26bfcaea73d1b0a0e727ff022ecb290ea5ebc0e49635be31a814dbc20238e47f" Feb 02 13:26:26 crc kubenswrapper[4721]: I0202 13:26:26.952790 4721 scope.go:117] "RemoveContainer" containerID="d2a55d0b43fdef668c43d93ef586abd1a401dd6f4301e561f85ee5cbf104d449" Feb 02 13:26:26 crc kubenswrapper[4721]: I0202 13:26:26.982013 4721 scope.go:117] "RemoveContainer" containerID="26bfcaea73d1b0a0e727ff022ecb290ea5ebc0e49635be31a814dbc20238e47f" Feb 02 13:26:26 crc kubenswrapper[4721]: E0202 13:26:26.982641 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26bfcaea73d1b0a0e727ff022ecb290ea5ebc0e49635be31a814dbc20238e47f\": container with ID starting with 26bfcaea73d1b0a0e727ff022ecb290ea5ebc0e49635be31a814dbc20238e47f not found: ID does not exist" containerID="26bfcaea73d1b0a0e727ff022ecb290ea5ebc0e49635be31a814dbc20238e47f" Feb 02 13:26:26 crc kubenswrapper[4721]: I0202 13:26:26.982676 4721 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26bfcaea73d1b0a0e727ff022ecb290ea5ebc0e49635be31a814dbc20238e47f"} err="failed to get container status \"26bfcaea73d1b0a0e727ff022ecb290ea5ebc0e49635be31a814dbc20238e47f\": rpc error: code = NotFound desc = could not find container \"26bfcaea73d1b0a0e727ff022ecb290ea5ebc0e49635be31a814dbc20238e47f\": container with ID starting with 26bfcaea73d1b0a0e727ff022ecb290ea5ebc0e49635be31a814dbc20238e47f not found: ID does not exist" Feb 02 13:26:26 crc kubenswrapper[4721]: I0202 13:26:26.982703 4721 scope.go:117] "RemoveContainer" containerID="d2a55d0b43fdef668c43d93ef586abd1a401dd6f4301e561f85ee5cbf104d449" Feb 02 13:26:26 crc kubenswrapper[4721]: E0202 13:26:26.983147 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2a55d0b43fdef668c43d93ef586abd1a401dd6f4301e561f85ee5cbf104d449\": container with ID starting with d2a55d0b43fdef668c43d93ef586abd1a401dd6f4301e561f85ee5cbf104d449 not found: ID does not exist" containerID="d2a55d0b43fdef668c43d93ef586abd1a401dd6f4301e561f85ee5cbf104d449" Feb 02 13:26:26 crc kubenswrapper[4721]: I0202 13:26:26.983169 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2a55d0b43fdef668c43d93ef586abd1a401dd6f4301e561f85ee5cbf104d449"} err="failed to get container status \"d2a55d0b43fdef668c43d93ef586abd1a401dd6f4301e561f85ee5cbf104d449\": rpc error: code = NotFound desc = could not find container \"d2a55d0b43fdef668c43d93ef586abd1a401dd6f4301e561f85ee5cbf104d449\": container with ID starting with d2a55d0b43fdef668c43d93ef586abd1a401dd6f4301e561f85ee5cbf104d449 not found: ID does not exist" Feb 02 13:26:26 crc kubenswrapper[4721]: I0202 13:26:26.983181 4721 scope.go:117] "RemoveContainer" containerID="26bfcaea73d1b0a0e727ff022ecb290ea5ebc0e49635be31a814dbc20238e47f" Feb 02 13:26:26 crc kubenswrapper[4721]: I0202 13:26:26.983556 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26bfcaea73d1b0a0e727ff022ecb290ea5ebc0e49635be31a814dbc20238e47f"} err="failed to get container status \"26bfcaea73d1b0a0e727ff022ecb290ea5ebc0e49635be31a814dbc20238e47f\": rpc error: code = NotFound desc = could not find container \"26bfcaea73d1b0a0e727ff022ecb290ea5ebc0e49635be31a814dbc20238e47f\": container with ID starting with 26bfcaea73d1b0a0e727ff022ecb290ea5ebc0e49635be31a814dbc20238e47f not found: ID does not exist" Feb 02 13:26:26 crc kubenswrapper[4721]: I0202 13:26:26.983574 4721 scope.go:117] "RemoveContainer" containerID="d2a55d0b43fdef668c43d93ef586abd1a401dd6f4301e561f85ee5cbf104d449" Feb 02 13:26:26 crc kubenswrapper[4721]: I0202 13:26:26.983862 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2a55d0b43fdef668c43d93ef586abd1a401dd6f4301e561f85ee5cbf104d449"} err="failed to get container status \"d2a55d0b43fdef668c43d93ef586abd1a401dd6f4301e561f85ee5cbf104d449\": rpc error: code = NotFound desc = could not find container \"d2a55d0b43fdef668c43d93ef586abd1a401dd6f4301e561f85ee5cbf104d449\": container with ID starting with d2a55d0b43fdef668c43d93ef586abd1a401dd6f4301e561f85ee5cbf104d449 not found: ID does not exist" Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.219201 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16c1a06d-626e-414f-9fa0-a09e68349ffa-config-data\") pod 
\"16c1a06d-626e-414f-9fa0-a09e68349ffa\" (UID: \"16c1a06d-626e-414f-9fa0-a09e68349ffa\") " Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.223390 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c1a06d-626e-414f-9fa0-a09e68349ffa-config-data" (OuterVolumeSpecName: "config-data") pod "16c1a06d-626e-414f-9fa0-a09e68349ffa" (UID: "16c1a06d-626e-414f-9fa0-a09e68349ffa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.337327 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16c1a06d-626e-414f-9fa0-a09e68349ffa-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.554346 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.571595 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.589362 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:26:27 crc kubenswrapper[4721]: E0202 13:26:27.589893 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16c1a06d-626e-414f-9fa0-a09e68349ffa" containerName="nova-metadata-log" Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.589912 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c1a06d-626e-414f-9fa0-a09e68349ffa" containerName="nova-metadata-log" Feb 02 13:26:27 crc kubenswrapper[4721]: E0202 13:26:27.589956 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16c1a06d-626e-414f-9fa0-a09e68349ffa" containerName="nova-metadata-metadata" Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.589963 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c1a06d-626e-414f-9fa0-a09e68349ffa" containerName="nova-metadata-metadata" Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.590191 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="16c1a06d-626e-414f-9fa0-a09e68349ffa" containerName="nova-metadata-metadata" Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.590212 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="16c1a06d-626e-414f-9fa0-a09e68349ffa" containerName="nova-metadata-log" Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.591454 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.594695 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.597712 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.646837 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.649970 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"07ae1ad9-237c-4b8f-acdf-ba750cc6316e\") " pod="openstack/nova-metadata-0" Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.650193 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-logs\") pod \"nova-metadata-0\" (UID: \"07ae1ad9-237c-4b8f-acdf-ba750cc6316e\") " pod="openstack/nova-metadata-0" Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.650302 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-config-data\") pod \"nova-metadata-0\" (UID: \"07ae1ad9-237c-4b8f-acdf-ba750cc6316e\") " pod="openstack/nova-metadata-0" Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.650499 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhxlz\" (UniqueName: \"kubernetes.io/projected/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-kube-api-access-vhxlz\") pod \"nova-metadata-0\" (UID: \"07ae1ad9-237c-4b8f-acdf-ba750cc6316e\") " pod="openstack/nova-metadata-0" Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.650677 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"07ae1ad9-237c-4b8f-acdf-ba750cc6316e\") " pod="openstack/nova-metadata-0" Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.752021 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhxlz\" (UniqueName: \"kubernetes.io/projected/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-kube-api-access-vhxlz\") pod \"nova-metadata-0\" (UID: \"07ae1ad9-237c-4b8f-acdf-ba750cc6316e\") " pod="openstack/nova-metadata-0" Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.752151 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"07ae1ad9-237c-4b8f-acdf-ba750cc6316e\") " pod="openstack/nova-metadata-0" Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.752249 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"07ae1ad9-237c-4b8f-acdf-ba750cc6316e\") " pod="openstack/nova-metadata-0" Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.752286 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-logs\") pod \"nova-metadata-0\" (UID: \"07ae1ad9-237c-4b8f-acdf-ba750cc6316e\") " pod="openstack/nova-metadata-0" Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.752322 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-config-data\") pod \"nova-metadata-0\" (UID: \"07ae1ad9-237c-4b8f-acdf-ba750cc6316e\") " pod="openstack/nova-metadata-0" Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.753644 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-logs\") pod \"nova-metadata-0\" (UID: \"07ae1ad9-237c-4b8f-acdf-ba750cc6316e\") " pod="openstack/nova-metadata-0" Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.758091 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"07ae1ad9-237c-4b8f-acdf-ba750cc6316e\") " pod="openstack/nova-metadata-0" Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.758695 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-config-data\") pod \"nova-metadata-0\" (UID: \"07ae1ad9-237c-4b8f-acdf-ba750cc6316e\") " pod="openstack/nova-metadata-0" Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.759091 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"07ae1ad9-237c-4b8f-acdf-ba750cc6316e\") " pod="openstack/nova-metadata-0" Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.773271 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhxlz\" (UniqueName: \"kubernetes.io/projected/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-kube-api-access-vhxlz\") pod \"nova-metadata-0\" (UID: \"07ae1ad9-237c-4b8f-acdf-ba750cc6316e\") " pod="openstack/nova-metadata-0" Feb 02 13:26:27 crc kubenswrapper[4721]: I0202 13:26:27.924312 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 13:26:28 crc kubenswrapper[4721]: I0202 13:26:28.291126 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:26:28 crc kubenswrapper[4721]: I0202 13:26:28.432714 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16c1a06d-626e-414f-9fa0-a09e68349ffa" path="/var/lib/kubelet/pods/16c1a06d-626e-414f-9fa0-a09e68349ffa/volumes" Feb 02 13:26:28 crc kubenswrapper[4721]: I0202 13:26:28.434098 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:26:28 crc kubenswrapper[4721]: W0202 13:26:28.442143 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07ae1ad9_237c_4b8f_acdf_ba750cc6316e.slice/crio-401c3ded565d56700cb511af03efc19368dafbf26ff256c1b27d75fad4eec09d WatchSource:0}: Error finding container 401c3ded565d56700cb511af03efc19368dafbf26ff256c1b27d75fad4eec09d: Status 404 returned error can't find the container with id 401c3ded565d56700cb511af03efc19368dafbf26ff256c1b27d75fad4eec09d Feb 02 13:26:28 crc kubenswrapper[4721]: I0202 13:26:28.681344 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 13:26:28 crc kubenswrapper[4721]: I0202 13:26:28.681421 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 13:26:28 crc kubenswrapper[4721]: I0202 13:26:28.700607 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 13:26:28 crc kubenswrapper[4721]: I0202 13:26:28.700663 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 13:26:28 crc kubenswrapper[4721]: I0202 13:26:28.716605 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 13:26:28 crc kubenswrapper[4721]: I0202 13:26:28.742291 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-pclnt" Feb 02 13:26:28 crc kubenswrapper[4721]: I0202 13:26:28.844544 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-m87tn"] Feb 02 13:26:28 crc kubenswrapper[4721]: I0202 13:26:28.845150 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-m87tn" podUID="a217ca40-3638-474b-b739-cb8784823fa6" containerName="dnsmasq-dns" containerID="cri-o://02eb0b572a9d7db26d91ed79b8f1f7e0e781ae5e0ac90b394b51265b1c190088" gracePeriod=10 Feb 02 13:26:28 crc kubenswrapper[4721]: I0202 13:26:28.943668 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"07ae1ad9-237c-4b8f-acdf-ba750cc6316e","Type":"ContainerStarted","Data":"7c2a7168cf5c08335a6fc6fc28321f7bb3209bb8dc3bb8f3d305f70c6c81ce1d"} Feb 02 13:26:28 crc kubenswrapper[4721]: I0202 13:26:28.945234 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"07ae1ad9-237c-4b8f-acdf-ba750cc6316e","Type":"ContainerStarted","Data":"401c3ded565d56700cb511af03efc19368dafbf26ff256c1b27d75fad4eec09d"} Feb 02 13:26:28 crc kubenswrapper[4721]: I0202 13:26:28.947193 4721 generic.go:334] "Generic (PLEG): container finished" podID="798dac79-94bd-4655-b409-4b173956cdbf" 
containerID="986695a877ff6ee1eff46d199a1d9a03005ab3f7a3f83da3d9212653e0413f7c" exitCode=0 Feb 02 13:26:28 crc kubenswrapper[4721]: I0202 13:26:28.947429 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4rm59" event={"ID":"798dac79-94bd-4655-b409-4b173956cdbf","Type":"ContainerDied","Data":"986695a877ff6ee1eff46d199a1d9a03005ab3f7a3f83da3d9212653e0413f7c"} Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.011746 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.553988 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-m87tn" Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.599242 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-config\") pod \"a217ca40-3638-474b-b739-cb8784823fa6\" (UID: \"a217ca40-3638-474b-b739-cb8784823fa6\") " Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.599348 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-ovsdbserver-sb\") pod \"a217ca40-3638-474b-b739-cb8784823fa6\" (UID: \"a217ca40-3638-474b-b739-cb8784823fa6\") " Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.599512 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crj69\" (UniqueName: \"kubernetes.io/projected/a217ca40-3638-474b-b739-cb8784823fa6-kube-api-access-crj69\") pod \"a217ca40-3638-474b-b739-cb8784823fa6\" (UID: \"a217ca40-3638-474b-b739-cb8784823fa6\") " Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.600106 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-ovsdbserver-nb\") pod \"a217ca40-3638-474b-b739-cb8784823fa6\" (UID: \"a217ca40-3638-474b-b739-cb8784823fa6\") " Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.600167 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-dns-swift-storage-0\") pod \"a217ca40-3638-474b-b739-cb8784823fa6\" (UID: \"a217ca40-3638-474b-b739-cb8784823fa6\") " Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.600528 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-dns-svc\") pod \"a217ca40-3638-474b-b739-cb8784823fa6\" (UID: \"a217ca40-3638-474b-b739-cb8784823fa6\") " Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.616807 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a217ca40-3638-474b-b739-cb8784823fa6-kube-api-access-crj69" (OuterVolumeSpecName: "kube-api-access-crj69") pod "a217ca40-3638-474b-b739-cb8784823fa6" (UID: "a217ca40-3638-474b-b739-cb8784823fa6"). InnerVolumeSpecName "kube-api-access-crj69". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.684976 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-config" (OuterVolumeSpecName: "config") pod "a217ca40-3638-474b-b739-cb8784823fa6" (UID: "a217ca40-3638-474b-b739-cb8784823fa6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.701807 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a217ca40-3638-474b-b739-cb8784823fa6" (UID: "a217ca40-3638-474b-b739-cb8784823fa6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.702105 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a217ca40-3638-474b-b739-cb8784823fa6" (UID: "a217ca40-3638-474b-b739-cb8784823fa6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.703977 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.704011 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.704026 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.704038 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crj69\" (UniqueName: \"kubernetes.io/projected/a217ca40-3638-474b-b739-cb8784823fa6-kube-api-access-crj69\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.728225 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a217ca40-3638-474b-b739-cb8784823fa6" (UID: "a217ca40-3638-474b-b739-cb8784823fa6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.749201 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a217ca40-3638-474b-b739-cb8784823fa6" (UID: "a217ca40-3638-474b-b739-cb8784823fa6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.783344 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8715f5a7-97a1-496d-be28-13c326b54135" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.249:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.783682 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8715f5a7-97a1-496d-be28-13c326b54135" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.249:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.818428 4721 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.818467 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a217ca40-3638-474b-b739-cb8784823fa6-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.967343 4721 generic.go:334] "Generic (PLEG): container finished" podID="6d38541e-139a-425e-a7bd-f7c484f7266b" containerID="7eef54336f3dd0e3ab875f1a48960393b687a8f714292e0019c4853d4094a4be" exitCode=0 Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.967408 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hclll" event={"ID":"6d38541e-139a-425e-a7bd-f7c484f7266b","Type":"ContainerDied","Data":"7eef54336f3dd0e3ab875f1a48960393b687a8f714292e0019c4853d4094a4be"} Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.971542 4721 generic.go:334] "Generic (PLEG): container finished" podID="a217ca40-3638-474b-b739-cb8784823fa6" containerID="02eb0b572a9d7db26d91ed79b8f1f7e0e781ae5e0ac90b394b51265b1c190088" exitCode=0 Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.971620 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-m87tn" event={"ID":"a217ca40-3638-474b-b739-cb8784823fa6","Type":"ContainerDied","Data":"02eb0b572a9d7db26d91ed79b8f1f7e0e781ae5e0ac90b394b51265b1c190088"} Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.971649 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-m87tn" event={"ID":"a217ca40-3638-474b-b739-cb8784823fa6","Type":"ContainerDied","Data":"855aab218dced6a9a20cd36dee1f3e920c647b6da54cc409503c35b4f9458f8e"} Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.971646 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-m87tn" Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.971668 4721 scope.go:117] "RemoveContainer" containerID="02eb0b572a9d7db26d91ed79b8f1f7e0e781ae5e0ac90b394b51265b1c190088" Feb 02 13:26:29 crc kubenswrapper[4721]: I0202 13:26:29.980651 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"07ae1ad9-237c-4b8f-acdf-ba750cc6316e","Type":"ContainerStarted","Data":"0dd31ac8d5e5c820894217a072d1a44f08ac4334e8888db92c06bac98ee2a2c4"} Feb 02 13:26:30 crc kubenswrapper[4721]: I0202 13:26:30.011521 4721 scope.go:117] "RemoveContainer" containerID="bfd05ac10c25cef470159320ac73442924479047b666e7ec3809106d9b26e0ed" Feb 02 13:26:30 crc kubenswrapper[4721]: I0202 13:26:30.027322 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.027299023 podStartE2EDuration="3.027299023s" podCreationTimestamp="2026-02-02 13:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:26:30.018482864 +0000 UTC m=+1530.320997253" watchObservedRunningTime="2026-02-02 13:26:30.027299023 +0000 UTC m=+1530.329813412" Feb 02 13:26:30 crc kubenswrapper[4721]: I0202 13:26:30.056222 4721 scope.go:117] "RemoveContainer" containerID="02eb0b572a9d7db26d91ed79b8f1f7e0e781ae5e0ac90b394b51265b1c190088" Feb 02 13:26:30 crc kubenswrapper[4721]: E0202 13:26:30.060211 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02eb0b572a9d7db26d91ed79b8f1f7e0e781ae5e0ac90b394b51265b1c190088\": container with ID starting with 02eb0b572a9d7db26d91ed79b8f1f7e0e781ae5e0ac90b394b51265b1c190088 not found: ID does not exist" containerID="02eb0b572a9d7db26d91ed79b8f1f7e0e781ae5e0ac90b394b51265b1c190088" Feb 02 13:26:30 crc kubenswrapper[4721]: I0202 13:26:30.060263 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02eb0b572a9d7db26d91ed79b8f1f7e0e781ae5e0ac90b394b51265b1c190088"} err="failed to get container status \"02eb0b572a9d7db26d91ed79b8f1f7e0e781ae5e0ac90b394b51265b1c190088\": rpc error: code = NotFound desc = could not find container \"02eb0b572a9d7db26d91ed79b8f1f7e0e781ae5e0ac90b394b51265b1c190088\": container with ID starting with 02eb0b572a9d7db26d91ed79b8f1f7e0e781ae5e0ac90b394b51265b1c190088 not found: ID does not exist" Feb 02 13:26:30 crc kubenswrapper[4721]: I0202 13:26:30.060297 4721 scope.go:117] "RemoveContainer" containerID="bfd05ac10c25cef470159320ac73442924479047b666e7ec3809106d9b26e0ed" Feb 02 13:26:30 crc kubenswrapper[4721]: E0202 13:26:30.064148 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfd05ac10c25cef470159320ac73442924479047b666e7ec3809106d9b26e0ed\": container with ID starting with bfd05ac10c25cef470159320ac73442924479047b666e7ec3809106d9b26e0ed not found: ID does not exist" containerID="bfd05ac10c25cef470159320ac73442924479047b666e7ec3809106d9b26e0ed" Feb 02 13:26:30 crc kubenswrapper[4721]: I0202 13:26:30.064181 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfd05ac10c25cef470159320ac73442924479047b666e7ec3809106d9b26e0ed"} err="failed to get container status \"bfd05ac10c25cef470159320ac73442924479047b666e7ec3809106d9b26e0ed\": rpc error: code = NotFound desc = could not find 
container \"bfd05ac10c25cef470159320ac73442924479047b666e7ec3809106d9b26e0ed\": container with ID starting with bfd05ac10c25cef470159320ac73442924479047b666e7ec3809106d9b26e0ed not found: ID does not exist" Feb 02 13:26:30 crc kubenswrapper[4721]: I0202 13:26:30.070124 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-m87tn"] Feb 02 13:26:30 crc kubenswrapper[4721]: I0202 13:26:30.079938 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-m87tn"] Feb 02 13:26:30 crc kubenswrapper[4721]: I0202 13:26:30.433818 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a217ca40-3638-474b-b739-cb8784823fa6" path="/var/lib/kubelet/pods/a217ca40-3638-474b-b739-cb8784823fa6/volumes" Feb 02 13:26:30 crc kubenswrapper[4721]: I0202 13:26:30.542322 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4rm59" Feb 02 13:26:30 crc kubenswrapper[4721]: I0202 13:26:30.639497 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/798dac79-94bd-4655-b409-4b173956cdbf-scripts\") pod \"798dac79-94bd-4655-b409-4b173956cdbf\" (UID: \"798dac79-94bd-4655-b409-4b173956cdbf\") " Feb 02 13:26:30 crc kubenswrapper[4721]: I0202 13:26:30.639741 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlbf6\" (UniqueName: \"kubernetes.io/projected/798dac79-94bd-4655-b409-4b173956cdbf-kube-api-access-mlbf6\") pod \"798dac79-94bd-4655-b409-4b173956cdbf\" (UID: \"798dac79-94bd-4655-b409-4b173956cdbf\") " Feb 02 13:26:30 crc kubenswrapper[4721]: I0202 13:26:30.639791 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798dac79-94bd-4655-b409-4b173956cdbf-combined-ca-bundle\") pod \"798dac79-94bd-4655-b409-4b173956cdbf\" (UID: \"798dac79-94bd-4655-b409-4b173956cdbf\") " Feb 02 13:26:30 crc kubenswrapper[4721]: I0202 13:26:30.639836 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798dac79-94bd-4655-b409-4b173956cdbf-config-data\") pod \"798dac79-94bd-4655-b409-4b173956cdbf\" (UID: \"798dac79-94bd-4655-b409-4b173956cdbf\") " Feb 02 13:26:30 crc kubenswrapper[4721]: I0202 13:26:30.663647 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/798dac79-94bd-4655-b409-4b173956cdbf-kube-api-access-mlbf6" (OuterVolumeSpecName: "kube-api-access-mlbf6") pod "798dac79-94bd-4655-b409-4b173956cdbf" (UID: "798dac79-94bd-4655-b409-4b173956cdbf"). InnerVolumeSpecName "kube-api-access-mlbf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:26:30 crc kubenswrapper[4721]: I0202 13:26:30.665198 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/798dac79-94bd-4655-b409-4b173956cdbf-scripts" (OuterVolumeSpecName: "scripts") pod "798dac79-94bd-4655-b409-4b173956cdbf" (UID: "798dac79-94bd-4655-b409-4b173956cdbf"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:26:30 crc kubenswrapper[4721]: I0202 13:26:30.681319 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/798dac79-94bd-4655-b409-4b173956cdbf-config-data" (OuterVolumeSpecName: "config-data") pod "798dac79-94bd-4655-b409-4b173956cdbf" (UID: "798dac79-94bd-4655-b409-4b173956cdbf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:26:30 crc kubenswrapper[4721]: I0202 13:26:30.687173 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/798dac79-94bd-4655-b409-4b173956cdbf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "798dac79-94bd-4655-b409-4b173956cdbf" (UID: "798dac79-94bd-4655-b409-4b173956cdbf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:26:30 crc kubenswrapper[4721]: I0202 13:26:30.742541 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlbf6\" (UniqueName: \"kubernetes.io/projected/798dac79-94bd-4655-b409-4b173956cdbf-kube-api-access-mlbf6\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:30 crc kubenswrapper[4721]: I0202 13:26:30.742568 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798dac79-94bd-4655-b409-4b173956cdbf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:30 crc kubenswrapper[4721]: I0202 13:26:30.742577 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798dac79-94bd-4655-b409-4b173956cdbf-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:30 crc kubenswrapper[4721]: I0202 13:26:30.742585 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/798dac79-94bd-4655-b409-4b173956cdbf-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:31 crc kubenswrapper[4721]: I0202 13:26:31.007781 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4rm59" Feb 02 13:26:31 crc kubenswrapper[4721]: I0202 13:26:31.007924 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4rm59" event={"ID":"798dac79-94bd-4655-b409-4b173956cdbf","Type":"ContainerDied","Data":"f420768dcd70c2968aba8188e47922f2bb1ac5868d1594e6d772fbe971edf203"} Feb 02 13:26:31 crc kubenswrapper[4721]: I0202 13:26:31.007978 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f420768dcd70c2968aba8188e47922f2bb1ac5868d1594e6d772fbe971edf203" Feb 02 13:26:31 crc kubenswrapper[4721]: E0202 13:26:31.024801 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf01b253a_c7c6_4c9e_a800_a1732ba06f37.slice/crio-5b3069f620076da85927b973fa828028181bff883e33dc0cd16ab04434fa94e6\": RecentStats: unable to find data in memory cache]" Feb 02 13:26:31 crc kubenswrapper[4721]: I0202 13:26:31.190613 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:26:31 crc kubenswrapper[4721]: I0202 13:26:31.191200 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8715f5a7-97a1-496d-be28-13c326b54135" containerName="nova-api-log" containerID="cri-o://e678d0355031514e26c0e876e17ad1d6766f6e13bf506ad8e8a465663e92ced1" gracePeriod=30 Feb 02 13:26:31 crc kubenswrapper[4721]: I0202 13:26:31.191722 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8715f5a7-97a1-496d-be28-13c326b54135" containerName="nova-api-api" containerID="cri-o://187aece21bb2425b5bf5ce341d27bac916d46fddbd34aeac0734b4f84ad64e63" gracePeriod=30 Feb 02 13:26:31 crc kubenswrapper[4721]: I0202 13:26:31.221238 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 13:26:31 crc kubenswrapper[4721]: I0202 13:26:31.221413 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="6deee91e-3b9b-46a0-a05e-613827b42808" containerName="nova-scheduler-scheduler" containerID="cri-o://3d9337e6ad6b91226e1d6dffcacca3ebd66fbbc0402ee3e8f5821c0fb854a95c" gracePeriod=30 Feb 02 13:26:31 crc kubenswrapper[4721]: I0202 13:26:31.260259 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:26:31 crc kubenswrapper[4721]: I0202 13:26:31.666850 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hclll" Feb 02 13:26:31 crc kubenswrapper[4721]: I0202 13:26:31.681317 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d38541e-139a-425e-a7bd-f7c484f7266b-combined-ca-bundle\") pod \"6d38541e-139a-425e-a7bd-f7c484f7266b\" (UID: \"6d38541e-139a-425e-a7bd-f7c484f7266b\") " Feb 02 13:26:31 crc kubenswrapper[4721]: I0202 13:26:31.681530 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d38541e-139a-425e-a7bd-f7c484f7266b-config-data\") pod \"6d38541e-139a-425e-a7bd-f7c484f7266b\" (UID: \"6d38541e-139a-425e-a7bd-f7c484f7266b\") " Feb 02 13:26:31 crc kubenswrapper[4721]: I0202 13:26:31.681778 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d38541e-139a-425e-a7bd-f7c484f7266b-scripts\") pod \"6d38541e-139a-425e-a7bd-f7c484f7266b\" (UID: \"6d38541e-139a-425e-a7bd-f7c484f7266b\") " Feb 02 13:26:31 crc kubenswrapper[4721]: I0202 13:26:31.681833 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdcgd\" (UniqueName: \"kubernetes.io/projected/6d38541e-139a-425e-a7bd-f7c484f7266b-kube-api-access-fdcgd\") pod \"6d38541e-139a-425e-a7bd-f7c484f7266b\" (UID: \"6d38541e-139a-425e-a7bd-f7c484f7266b\") " Feb 02 13:26:31 crc kubenswrapper[4721]: I0202 13:26:31.696251 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d38541e-139a-425e-a7bd-f7c484f7266b-scripts" (OuterVolumeSpecName: "scripts") pod "6d38541e-139a-425e-a7bd-f7c484f7266b" (UID: "6d38541e-139a-425e-a7bd-f7c484f7266b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:26:31 crc kubenswrapper[4721]: I0202 13:26:31.710282 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d38541e-139a-425e-a7bd-f7c484f7266b-kube-api-access-fdcgd" (OuterVolumeSpecName: "kube-api-access-fdcgd") pod "6d38541e-139a-425e-a7bd-f7c484f7266b" (UID: "6d38541e-139a-425e-a7bd-f7c484f7266b"). InnerVolumeSpecName "kube-api-access-fdcgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:26:31 crc kubenswrapper[4721]: I0202 13:26:31.776231 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d38541e-139a-425e-a7bd-f7c484f7266b-config-data" (OuterVolumeSpecName: "config-data") pod "6d38541e-139a-425e-a7bd-f7c484f7266b" (UID: "6d38541e-139a-425e-a7bd-f7c484f7266b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:26:31 crc kubenswrapper[4721]: I0202 13:26:31.784339 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d38541e-139a-425e-a7bd-f7c484f7266b-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:31 crc kubenswrapper[4721]: I0202 13:26:31.784371 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d38541e-139a-425e-a7bd-f7c484f7266b-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:31 crc kubenswrapper[4721]: I0202 13:26:31.784384 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdcgd\" (UniqueName: \"kubernetes.io/projected/6d38541e-139a-425e-a7bd-f7c484f7266b-kube-api-access-fdcgd\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:31 crc kubenswrapper[4721]: I0202 13:26:31.802203 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d38541e-139a-425e-a7bd-f7c484f7266b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d38541e-139a-425e-a7bd-f7c484f7266b" (UID: "6d38541e-139a-425e-a7bd-f7c484f7266b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:26:31 crc kubenswrapper[4721]: I0202 13:26:31.887355 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d38541e-139a-425e-a7bd-f7c484f7266b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.039261 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hclll" event={"ID":"6d38541e-139a-425e-a7bd-f7c484f7266b","Type":"ContainerDied","Data":"f16e0687e4c95545bad8ad3d8adc326046486898b2e5a9a157138600b93876bb"} Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.039297 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hclll" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.039307 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f16e0687e4c95545bad8ad3d8adc326046486898b2e5a9a157138600b93876bb" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.043195 4721 generic.go:334] "Generic (PLEG): container finished" podID="8715f5a7-97a1-496d-be28-13c326b54135" containerID="e678d0355031514e26c0e876e17ad1d6766f6e13bf506ad8e8a465663e92ced1" exitCode=143 Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.043306 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8715f5a7-97a1-496d-be28-13c326b54135","Type":"ContainerDied","Data":"e678d0355031514e26c0e876e17ad1d6766f6e13bf506ad8e8a465663e92ced1"} Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.043406 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="07ae1ad9-237c-4b8f-acdf-ba750cc6316e" containerName="nova-metadata-log" containerID="cri-o://7c2a7168cf5c08335a6fc6fc28321f7bb3209bb8dc3bb8f3d305f70c6c81ce1d" gracePeriod=30 Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.043492 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="07ae1ad9-237c-4b8f-acdf-ba750cc6316e" containerName="nova-metadata-metadata" containerID="cri-o://0dd31ac8d5e5c820894217a072d1a44f08ac4334e8888db92c06bac98ee2a2c4" gracePeriod=30 Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.093305 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 13:26:32 crc kubenswrapper[4721]: E0202 13:26:32.093897 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a217ca40-3638-474b-b739-cb8784823fa6" containerName="dnsmasq-dns" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.093914 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="a217ca40-3638-474b-b739-cb8784823fa6" containerName="dnsmasq-dns" Feb 02 13:26:32 crc kubenswrapper[4721]: E0202 13:26:32.093936 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d38541e-139a-425e-a7bd-f7c484f7266b" containerName="nova-cell1-conductor-db-sync" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.093942 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d38541e-139a-425e-a7bd-f7c484f7266b" containerName="nova-cell1-conductor-db-sync" Feb 02 13:26:32 crc kubenswrapper[4721]: E0202 13:26:32.093951 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="798dac79-94bd-4655-b409-4b173956cdbf" containerName="nova-manage" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.093958 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="798dac79-94bd-4655-b409-4b173956cdbf" containerName="nova-manage" Feb 02 13:26:32 crc kubenswrapper[4721]: E0202 13:26:32.093998 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a217ca40-3638-474b-b739-cb8784823fa6" containerName="init" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.094005 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="a217ca40-3638-474b-b739-cb8784823fa6" containerName="init" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.094222 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="a217ca40-3638-474b-b739-cb8784823fa6" containerName="dnsmasq-dns" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.094235 4721 
memory_manager.go:354] "RemoveStaleState removing state" podUID="798dac79-94bd-4655-b409-4b173956cdbf" containerName="nova-manage" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.094246 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d38541e-139a-425e-a7bd-f7c484f7266b" containerName="nova-cell1-conductor-db-sync" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.095055 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.103268 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.113281 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.197536 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np8xt\" (UniqueName: \"kubernetes.io/projected/3830e692-ad9d-48c7-800f-dc63cadb2376-kube-api-access-np8xt\") pod \"nova-cell1-conductor-0\" (UID: \"3830e692-ad9d-48c7-800f-dc63cadb2376\") " pod="openstack/nova-cell1-conductor-0" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.197778 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3830e692-ad9d-48c7-800f-dc63cadb2376-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3830e692-ad9d-48c7-800f-dc63cadb2376\") " pod="openstack/nova-cell1-conductor-0" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.198030 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3830e692-ad9d-48c7-800f-dc63cadb2376-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3830e692-ad9d-48c7-800f-dc63cadb2376\") " pod="openstack/nova-cell1-conductor-0" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.299404 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np8xt\" (UniqueName: \"kubernetes.io/projected/3830e692-ad9d-48c7-800f-dc63cadb2376-kube-api-access-np8xt\") pod \"nova-cell1-conductor-0\" (UID: \"3830e692-ad9d-48c7-800f-dc63cadb2376\") " pod="openstack/nova-cell1-conductor-0" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.299777 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3830e692-ad9d-48c7-800f-dc63cadb2376-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3830e692-ad9d-48c7-800f-dc63cadb2376\") " pod="openstack/nova-cell1-conductor-0" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.299848 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3830e692-ad9d-48c7-800f-dc63cadb2376-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3830e692-ad9d-48c7-800f-dc63cadb2376\") " pod="openstack/nova-cell1-conductor-0" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.306173 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3830e692-ad9d-48c7-800f-dc63cadb2376-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3830e692-ad9d-48c7-800f-dc63cadb2376\") " 
pod="openstack/nova-cell1-conductor-0" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.306242 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3830e692-ad9d-48c7-800f-dc63cadb2376-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3830e692-ad9d-48c7-800f-dc63cadb2376\") " pod="openstack/nova-cell1-conductor-0" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.326645 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np8xt\" (UniqueName: \"kubernetes.io/projected/3830e692-ad9d-48c7-800f-dc63cadb2376-kube-api-access-np8xt\") pod \"nova-cell1-conductor-0\" (UID: \"3830e692-ad9d-48c7-800f-dc63cadb2376\") " pod="openstack/nova-cell1-conductor-0" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.435821 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.701628 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.830045 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-config-data\") pod \"07ae1ad9-237c-4b8f-acdf-ba750cc6316e\" (UID: \"07ae1ad9-237c-4b8f-acdf-ba750cc6316e\") " Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.830394 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-logs\") pod \"07ae1ad9-237c-4b8f-acdf-ba750cc6316e\" (UID: \"07ae1ad9-237c-4b8f-acdf-ba750cc6316e\") " Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.830440 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-combined-ca-bundle\") pod \"07ae1ad9-237c-4b8f-acdf-ba750cc6316e\" (UID: \"07ae1ad9-237c-4b8f-acdf-ba750cc6316e\") " Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.830539 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-nova-metadata-tls-certs\") pod \"07ae1ad9-237c-4b8f-acdf-ba750cc6316e\" (UID: \"07ae1ad9-237c-4b8f-acdf-ba750cc6316e\") " Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.830566 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhxlz\" (UniqueName: \"kubernetes.io/projected/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-kube-api-access-vhxlz\") pod \"07ae1ad9-237c-4b8f-acdf-ba750cc6316e\" (UID: \"07ae1ad9-237c-4b8f-acdf-ba750cc6316e\") " Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.833165 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-logs" (OuterVolumeSpecName: "logs") pod "07ae1ad9-237c-4b8f-acdf-ba750cc6316e" (UID: "07ae1ad9-237c-4b8f-acdf-ba750cc6316e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.839268 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-kube-api-access-vhxlz" (OuterVolumeSpecName: "kube-api-access-vhxlz") pod "07ae1ad9-237c-4b8f-acdf-ba750cc6316e" (UID: "07ae1ad9-237c-4b8f-acdf-ba750cc6316e"). InnerVolumeSpecName "kube-api-access-vhxlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.876432 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07ae1ad9-237c-4b8f-acdf-ba750cc6316e" (UID: "07ae1ad9-237c-4b8f-acdf-ba750cc6316e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.911758 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-config-data" (OuterVolumeSpecName: "config-data") pod "07ae1ad9-237c-4b8f-acdf-ba750cc6316e" (UID: "07ae1ad9-237c-4b8f-acdf-ba750cc6316e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.934892 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.935201 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-logs\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.935326 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.935233 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "07ae1ad9-237c-4b8f-acdf-ba750cc6316e" (UID: "07ae1ad9-237c-4b8f-acdf-ba750cc6316e"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:26:32 crc kubenswrapper[4721]: I0202 13:26:32.935408 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhxlz\" (UniqueName: \"kubernetes.io/projected/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-kube-api-access-vhxlz\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.038317 4721 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/07ae1ad9-237c-4b8f-acdf-ba750cc6316e-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.092671 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.093419 4721 generic.go:334] "Generic (PLEG): container finished" podID="07ae1ad9-237c-4b8f-acdf-ba750cc6316e" containerID="0dd31ac8d5e5c820894217a072d1a44f08ac4334e8888db92c06bac98ee2a2c4" exitCode=0 Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.093447 4721 generic.go:334] "Generic (PLEG): container finished" podID="07ae1ad9-237c-4b8f-acdf-ba750cc6316e" containerID="7c2a7168cf5c08335a6fc6fc28321f7bb3209bb8dc3bb8f3d305f70c6c81ce1d" exitCode=143 Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.093579 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.098550 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"07ae1ad9-237c-4b8f-acdf-ba750cc6316e","Type":"ContainerDied","Data":"0dd31ac8d5e5c820894217a072d1a44f08ac4334e8888db92c06bac98ee2a2c4"} Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.098699 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"07ae1ad9-237c-4b8f-acdf-ba750cc6316e","Type":"ContainerDied","Data":"7c2a7168cf5c08335a6fc6fc28321f7bb3209bb8dc3bb8f3d305f70c6c81ce1d"} Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.098718 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"07ae1ad9-237c-4b8f-acdf-ba750cc6316e","Type":"ContainerDied","Data":"401c3ded565d56700cb511af03efc19368dafbf26ff256c1b27d75fad4eec09d"} Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.098752 4721 scope.go:117] "RemoveContainer" containerID="0dd31ac8d5e5c820894217a072d1a44f08ac4334e8888db92c06bac98ee2a2c4" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.100121 4721 generic.go:334] "Generic (PLEG): container finished" podID="6deee91e-3b9b-46a0-a05e-613827b42808" containerID="3d9337e6ad6b91226e1d6dffcacca3ebd66fbbc0402ee3e8f5821c0fb854a95c" exitCode=0 Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.100151 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6deee91e-3b9b-46a0-a05e-613827b42808","Type":"ContainerDied","Data":"3d9337e6ad6b91226e1d6dffcacca3ebd66fbbc0402ee3e8f5821c0fb854a95c"} Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.209568 4721 scope.go:117] "RemoveContainer" containerID="7c2a7168cf5c08335a6fc6fc28321f7bb3209bb8dc3bb8f3d305f70c6c81ce1d" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.211968 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.225207 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-metadata-0"] Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.242751 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:26:33 crc kubenswrapper[4721]: E0202 13:26:33.243465 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07ae1ad9-237c-4b8f-acdf-ba750cc6316e" containerName="nova-metadata-log" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.243492 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="07ae1ad9-237c-4b8f-acdf-ba750cc6316e" containerName="nova-metadata-log" Feb 02 13:26:33 crc kubenswrapper[4721]: E0202 13:26:33.243514 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07ae1ad9-237c-4b8f-acdf-ba750cc6316e" containerName="nova-metadata-metadata" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.243523 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="07ae1ad9-237c-4b8f-acdf-ba750cc6316e" containerName="nova-metadata-metadata" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.243864 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="07ae1ad9-237c-4b8f-acdf-ba750cc6316e" containerName="nova-metadata-log" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.243894 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="07ae1ad9-237c-4b8f-acdf-ba750cc6316e" containerName="nova-metadata-metadata" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.245535 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.246608 4721 scope.go:117] "RemoveContainer" containerID="0dd31ac8d5e5c820894217a072d1a44f08ac4334e8888db92c06bac98ee2a2c4" Feb 02 13:26:33 crc kubenswrapper[4721]: E0202 13:26:33.247080 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dd31ac8d5e5c820894217a072d1a44f08ac4334e8888db92c06bac98ee2a2c4\": container with ID starting with 0dd31ac8d5e5c820894217a072d1a44f08ac4334e8888db92c06bac98ee2a2c4 not found: ID does not exist" containerID="0dd31ac8d5e5c820894217a072d1a44f08ac4334e8888db92c06bac98ee2a2c4" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.247113 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dd31ac8d5e5c820894217a072d1a44f08ac4334e8888db92c06bac98ee2a2c4"} err="failed to get container status \"0dd31ac8d5e5c820894217a072d1a44f08ac4334e8888db92c06bac98ee2a2c4\": rpc error: code = NotFound desc = could not find container \"0dd31ac8d5e5c820894217a072d1a44f08ac4334e8888db92c06bac98ee2a2c4\": container with ID starting with 0dd31ac8d5e5c820894217a072d1a44f08ac4334e8888db92c06bac98ee2a2c4 not found: ID does not exist" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.247147 4721 scope.go:117] "RemoveContainer" containerID="7c2a7168cf5c08335a6fc6fc28321f7bb3209bb8dc3bb8f3d305f70c6c81ce1d" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.248906 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 13:26:33 crc kubenswrapper[4721]: E0202 13:26:33.248933 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c2a7168cf5c08335a6fc6fc28321f7bb3209bb8dc3bb8f3d305f70c6c81ce1d\": container with ID starting with 7c2a7168cf5c08335a6fc6fc28321f7bb3209bb8dc3bb8f3d305f70c6c81ce1d not found: ID does not exist" 
containerID="7c2a7168cf5c08335a6fc6fc28321f7bb3209bb8dc3bb8f3d305f70c6c81ce1d" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.248954 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c2a7168cf5c08335a6fc6fc28321f7bb3209bb8dc3bb8f3d305f70c6c81ce1d"} err="failed to get container status \"7c2a7168cf5c08335a6fc6fc28321f7bb3209bb8dc3bb8f3d305f70c6c81ce1d\": rpc error: code = NotFound desc = could not find container \"7c2a7168cf5c08335a6fc6fc28321f7bb3209bb8dc3bb8f3d305f70c6c81ce1d\": container with ID starting with 7c2a7168cf5c08335a6fc6fc28321f7bb3209bb8dc3bb8f3d305f70c6c81ce1d not found: ID does not exist" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.248971 4721 scope.go:117] "RemoveContainer" containerID="0dd31ac8d5e5c820894217a072d1a44f08ac4334e8888db92c06bac98ee2a2c4" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.249017 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.249438 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dd31ac8d5e5c820894217a072d1a44f08ac4334e8888db92c06bac98ee2a2c4"} err="failed to get container status \"0dd31ac8d5e5c820894217a072d1a44f08ac4334e8888db92c06bac98ee2a2c4\": rpc error: code = NotFound desc = could not find container \"0dd31ac8d5e5c820894217a072d1a44f08ac4334e8888db92c06bac98ee2a2c4\": container with ID starting with 0dd31ac8d5e5c820894217a072d1a44f08ac4334e8888db92c06bac98ee2a2c4 not found: ID does not exist" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.249482 4721 scope.go:117] "RemoveContainer" containerID="7c2a7168cf5c08335a6fc6fc28321f7bb3209bb8dc3bb8f3d305f70c6c81ce1d" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.249980 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c2a7168cf5c08335a6fc6fc28321f7bb3209bb8dc3bb8f3d305f70c6c81ce1d"} err="failed to get container status \"7c2a7168cf5c08335a6fc6fc28321f7bb3209bb8dc3bb8f3d305f70c6c81ce1d\": rpc error: code = NotFound desc = could not find container \"7c2a7168cf5c08335a6fc6fc28321f7bb3209bb8dc3bb8f3d305f70c6c81ce1d\": container with ID starting with 7c2a7168cf5c08335a6fc6fc28321f7bb3209bb8dc3bb8f3d305f70c6c81ce1d not found: ID does not exist" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.265900 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.308029 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.348558 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6deee91e-3b9b-46a0-a05e-613827b42808-combined-ca-bundle\") pod \"6deee91e-3b9b-46a0-a05e-613827b42808\" (UID: \"6deee91e-3b9b-46a0-a05e-613827b42808\") " Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.348672 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6deee91e-3b9b-46a0-a05e-613827b42808-config-data\") pod \"6deee91e-3b9b-46a0-a05e-613827b42808\" (UID: \"6deee91e-3b9b-46a0-a05e-613827b42808\") " Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.348696 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4h5x\" (UniqueName: \"kubernetes.io/projected/6deee91e-3b9b-46a0-a05e-613827b42808-kube-api-access-m4h5x\") pod \"6deee91e-3b9b-46a0-a05e-613827b42808\" (UID: \"6deee91e-3b9b-46a0-a05e-613827b42808\") " Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.348994 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t22jm\" (UniqueName: \"kubernetes.io/projected/fbfc7ae7-2e8c-4696-a72e-7308794bf726-kube-api-access-t22jm\") pod \"nova-metadata-0\" (UID: \"fbfc7ae7-2e8c-4696-a72e-7308794bf726\") " pod="openstack/nova-metadata-0" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.349101 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbfc7ae7-2e8c-4696-a72e-7308794bf726-logs\") pod \"nova-metadata-0\" (UID: \"fbfc7ae7-2e8c-4696-a72e-7308794bf726\") " pod="openstack/nova-metadata-0" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.349138 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbfc7ae7-2e8c-4696-a72e-7308794bf726-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fbfc7ae7-2e8c-4696-a72e-7308794bf726\") " pod="openstack/nova-metadata-0" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.349281 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbfc7ae7-2e8c-4696-a72e-7308794bf726-config-data\") pod \"nova-metadata-0\" (UID: \"fbfc7ae7-2e8c-4696-a72e-7308794bf726\") " pod="openstack/nova-metadata-0" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.349396 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbfc7ae7-2e8c-4696-a72e-7308794bf726-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fbfc7ae7-2e8c-4696-a72e-7308794bf726\") " pod="openstack/nova-metadata-0" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.361282 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6deee91e-3b9b-46a0-a05e-613827b42808-kube-api-access-m4h5x" (OuterVolumeSpecName: "kube-api-access-m4h5x") pod "6deee91e-3b9b-46a0-a05e-613827b42808" (UID: "6deee91e-3b9b-46a0-a05e-613827b42808"). InnerVolumeSpecName "kube-api-access-m4h5x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.402234 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6deee91e-3b9b-46a0-a05e-613827b42808-config-data" (OuterVolumeSpecName: "config-data") pod "6deee91e-3b9b-46a0-a05e-613827b42808" (UID: "6deee91e-3b9b-46a0-a05e-613827b42808"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.412241 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6deee91e-3b9b-46a0-a05e-613827b42808-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6deee91e-3b9b-46a0-a05e-613827b42808" (UID: "6deee91e-3b9b-46a0-a05e-613827b42808"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.450722 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbfc7ae7-2e8c-4696-a72e-7308794bf726-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fbfc7ae7-2e8c-4696-a72e-7308794bf726\") " pod="openstack/nova-metadata-0" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.450799 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t22jm\" (UniqueName: \"kubernetes.io/projected/fbfc7ae7-2e8c-4696-a72e-7308794bf726-kube-api-access-t22jm\") pod \"nova-metadata-0\" (UID: \"fbfc7ae7-2e8c-4696-a72e-7308794bf726\") " pod="openstack/nova-metadata-0" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.450908 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbfc7ae7-2e8c-4696-a72e-7308794bf726-logs\") pod \"nova-metadata-0\" (UID: \"fbfc7ae7-2e8c-4696-a72e-7308794bf726\") " pod="openstack/nova-metadata-0" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.450954 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbfc7ae7-2e8c-4696-a72e-7308794bf726-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fbfc7ae7-2e8c-4696-a72e-7308794bf726\") " pod="openstack/nova-metadata-0" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.451157 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbfc7ae7-2e8c-4696-a72e-7308794bf726-config-data\") pod \"nova-metadata-0\" (UID: \"fbfc7ae7-2e8c-4696-a72e-7308794bf726\") " pod="openstack/nova-metadata-0" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.451268 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6deee91e-3b9b-46a0-a05e-613827b42808-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.451283 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6deee91e-3b9b-46a0-a05e-613827b42808-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.451295 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4h5x\" (UniqueName: \"kubernetes.io/projected/6deee91e-3b9b-46a0-a05e-613827b42808-kube-api-access-m4h5x\") on node \"crc\" 
DevicePath \"\"" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.451890 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbfc7ae7-2e8c-4696-a72e-7308794bf726-logs\") pod \"nova-metadata-0\" (UID: \"fbfc7ae7-2e8c-4696-a72e-7308794bf726\") " pod="openstack/nova-metadata-0" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.454359 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbfc7ae7-2e8c-4696-a72e-7308794bf726-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fbfc7ae7-2e8c-4696-a72e-7308794bf726\") " pod="openstack/nova-metadata-0" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.455297 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbfc7ae7-2e8c-4696-a72e-7308794bf726-config-data\") pod \"nova-metadata-0\" (UID: \"fbfc7ae7-2e8c-4696-a72e-7308794bf726\") " pod="openstack/nova-metadata-0" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.455732 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbfc7ae7-2e8c-4696-a72e-7308794bf726-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fbfc7ae7-2e8c-4696-a72e-7308794bf726\") " pod="openstack/nova-metadata-0" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.469930 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t22jm\" (UniqueName: \"kubernetes.io/projected/fbfc7ae7-2e8c-4696-a72e-7308794bf726-kube-api-access-t22jm\") pod \"nova-metadata-0\" (UID: \"fbfc7ae7-2e8c-4696-a72e-7308794bf726\") " pod="openstack/nova-metadata-0" Feb 02 13:26:33 crc kubenswrapper[4721]: I0202 13:26:33.562494 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 13:26:34 crc kubenswrapper[4721]: I0202 13:26:34.123280 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3830e692-ad9d-48c7-800f-dc63cadb2376","Type":"ContainerStarted","Data":"fc2c5eca83ec18e057bf45edfa8caefc16dff1bab315e2b7db26854a8f90f3b4"} Feb 02 13:26:34 crc kubenswrapper[4721]: I0202 13:26:34.123659 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3830e692-ad9d-48c7-800f-dc63cadb2376","Type":"ContainerStarted","Data":"2881a9ef319197e2ad270396cac1ac972ea7ca641749e280f5adf40c46e58733"} Feb 02 13:26:34 crc kubenswrapper[4721]: I0202 13:26:34.124183 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 02 13:26:34 crc kubenswrapper[4721]: I0202 13:26:34.127460 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6deee91e-3b9b-46a0-a05e-613827b42808","Type":"ContainerDied","Data":"6be24bc44a924556f23ce7d2ad20d369ad32da8a65f3765a7bf2193db6ff3cfc"} Feb 02 13:26:34 crc kubenswrapper[4721]: I0202 13:26:34.127513 4721 scope.go:117] "RemoveContainer" containerID="3d9337e6ad6b91226e1d6dffcacca3ebd66fbbc0402ee3e8f5821c0fb854a95c" Feb 02 13:26:34 crc kubenswrapper[4721]: I0202 13:26:34.127645 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 13:26:34 crc kubenswrapper[4721]: I0202 13:26:34.136413 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:26:34 crc kubenswrapper[4721]: I0202 13:26:34.136448 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7756b9d78c-m87tn" podUID="a217ca40-3638-474b-b739-cb8784823fa6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.219:5353: i/o timeout" Feb 02 13:26:34 crc kubenswrapper[4721]: W0202 13:26:34.142629 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbfc7ae7_2e8c_4696_a72e_7308794bf726.slice/crio-47ce14db062af36ae7fdbf425398b0057cb5127414ac2231235ece255a18295a WatchSource:0}: Error finding container 47ce14db062af36ae7fdbf425398b0057cb5127414ac2231235ece255a18295a: Status 404 returned error can't find the container with id 47ce14db062af36ae7fdbf425398b0057cb5127414ac2231235ece255a18295a Feb 02 13:26:34 crc kubenswrapper[4721]: I0202 13:26:34.154663 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.154644651 podStartE2EDuration="2.154644651s" podCreationTimestamp="2026-02-02 13:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:26:34.148499553 +0000 UTC m=+1534.451013962" watchObservedRunningTime="2026-02-02 13:26:34.154644651 +0000 UTC m=+1534.457159040" Feb 02 13:26:34 crc kubenswrapper[4721]: I0202 13:26:34.227636 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 13:26:34 crc kubenswrapper[4721]: I0202 13:26:34.250573 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 13:26:34 crc kubenswrapper[4721]: I0202 13:26:34.268594 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 13:26:34 crc kubenswrapper[4721]: E0202 13:26:34.269145 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6deee91e-3b9b-46a0-a05e-613827b42808" containerName="nova-scheduler-scheduler" Feb 02 13:26:34 crc kubenswrapper[4721]: I0202 13:26:34.269164 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="6deee91e-3b9b-46a0-a05e-613827b42808" containerName="nova-scheduler-scheduler" Feb 02 13:26:34 crc kubenswrapper[4721]: I0202 13:26:34.269377 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="6deee91e-3b9b-46a0-a05e-613827b42808" containerName="nova-scheduler-scheduler" Feb 02 13:26:34 crc kubenswrapper[4721]: I0202 13:26:34.270231 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 13:26:34 crc kubenswrapper[4721]: I0202 13:26:34.273311 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 13:26:34 crc kubenswrapper[4721]: I0202 13:26:34.280304 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f591d46-b7ce-4767-987a-bcdaa2f6d3b1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1f591d46-b7ce-4767-987a-bcdaa2f6d3b1\") " pod="openstack/nova-scheduler-0" Feb 02 13:26:34 crc kubenswrapper[4721]: I0202 13:26:34.280356 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb5wd\" (UniqueName: \"kubernetes.io/projected/1f591d46-b7ce-4767-987a-bcdaa2f6d3b1-kube-api-access-zb5wd\") pod \"nova-scheduler-0\" (UID: \"1f591d46-b7ce-4767-987a-bcdaa2f6d3b1\") " pod="openstack/nova-scheduler-0" Feb 02 13:26:34 crc kubenswrapper[4721]: I0202 13:26:34.280424 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f591d46-b7ce-4767-987a-bcdaa2f6d3b1-config-data\") pod \"nova-scheduler-0\" (UID: \"1f591d46-b7ce-4767-987a-bcdaa2f6d3b1\") " pod="openstack/nova-scheduler-0" Feb 02 13:26:34 crc kubenswrapper[4721]: I0202 13:26:34.281958 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 13:26:34 crc kubenswrapper[4721]: I0202 13:26:34.390097 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f591d46-b7ce-4767-987a-bcdaa2f6d3b1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1f591d46-b7ce-4767-987a-bcdaa2f6d3b1\") " pod="openstack/nova-scheduler-0" Feb 02 13:26:34 crc kubenswrapper[4721]: I0202 13:26:34.390430 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb5wd\" (UniqueName: \"kubernetes.io/projected/1f591d46-b7ce-4767-987a-bcdaa2f6d3b1-kube-api-access-zb5wd\") pod \"nova-scheduler-0\" (UID: \"1f591d46-b7ce-4767-987a-bcdaa2f6d3b1\") " pod="openstack/nova-scheduler-0" Feb 02 13:26:34 crc kubenswrapper[4721]: I0202 13:26:34.390613 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f591d46-b7ce-4767-987a-bcdaa2f6d3b1-config-data\") pod \"nova-scheduler-0\" (UID: \"1f591d46-b7ce-4767-987a-bcdaa2f6d3b1\") " pod="openstack/nova-scheduler-0" Feb 02 13:26:34 crc kubenswrapper[4721]: I0202 13:26:34.397208 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f591d46-b7ce-4767-987a-bcdaa2f6d3b1-config-data\") pod \"nova-scheduler-0\" (UID: \"1f591d46-b7ce-4767-987a-bcdaa2f6d3b1\") " pod="openstack/nova-scheduler-0" Feb 02 13:26:34 crc kubenswrapper[4721]: I0202 13:26:34.407959 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f591d46-b7ce-4767-987a-bcdaa2f6d3b1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1f591d46-b7ce-4767-987a-bcdaa2f6d3b1\") " pod="openstack/nova-scheduler-0" Feb 02 13:26:34 crc kubenswrapper[4721]: I0202 13:26:34.410003 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb5wd\" (UniqueName: 
\"kubernetes.io/projected/1f591d46-b7ce-4767-987a-bcdaa2f6d3b1-kube-api-access-zb5wd\") pod \"nova-scheduler-0\" (UID: \"1f591d46-b7ce-4767-987a-bcdaa2f6d3b1\") " pod="openstack/nova-scheduler-0" Feb 02 13:26:34 crc kubenswrapper[4721]: I0202 13:26:34.428269 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07ae1ad9-237c-4b8f-acdf-ba750cc6316e" path="/var/lib/kubelet/pods/07ae1ad9-237c-4b8f-acdf-ba750cc6316e/volumes" Feb 02 13:26:34 crc kubenswrapper[4721]: I0202 13:26:34.429185 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6deee91e-3b9b-46a0-a05e-613827b42808" path="/var/lib/kubelet/pods/6deee91e-3b9b-46a0-a05e-613827b42808/volumes" Feb 02 13:26:34 crc kubenswrapper[4721]: I0202 13:26:34.637245 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 13:26:35 crc kubenswrapper[4721]: I0202 13:26:35.148778 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fbfc7ae7-2e8c-4696-a72e-7308794bf726","Type":"ContainerStarted","Data":"74967a457dae02a2ce8a603739bc30e8cdd7da978739f51561117554fb3ba203"} Feb 02 13:26:35 crc kubenswrapper[4721]: I0202 13:26:35.149504 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fbfc7ae7-2e8c-4696-a72e-7308794bf726","Type":"ContainerStarted","Data":"4fd6cf6542770e67b596179be2f1fb73bfe89b859a5b9f3f4406655d59a15229"} Feb 02 13:26:35 crc kubenswrapper[4721]: I0202 13:26:35.149522 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fbfc7ae7-2e8c-4696-a72e-7308794bf726","Type":"ContainerStarted","Data":"47ce14db062af36ae7fdbf425398b0057cb5127414ac2231235ece255a18295a"} Feb 02 13:26:35 crc kubenswrapper[4721]: W0202 13:26:35.187522 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f591d46_b7ce_4767_987a_bcdaa2f6d3b1.slice/crio-069121de18c3f33f1304829d8169978a460562815bd16049d235be352ef578bb WatchSource:0}: Error finding container 069121de18c3f33f1304829d8169978a460562815bd16049d235be352ef578bb: Status 404 returned error can't find the container with id 069121de18c3f33f1304829d8169978a460562815bd16049d235be352ef578bb Feb 02 13:26:35 crc kubenswrapper[4721]: I0202 13:26:35.196528 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.196501347 podStartE2EDuration="2.196501347s" podCreationTimestamp="2026-02-02 13:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:26:35.166305137 +0000 UTC m=+1535.468819526" watchObservedRunningTime="2026-02-02 13:26:35.196501347 +0000 UTC m=+1535.499015736" Feb 02 13:26:35 crc kubenswrapper[4721]: I0202 13:26:35.213156 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 13:26:36 crc kubenswrapper[4721]: I0202 13:26:36.164254 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1f591d46-b7ce-4767-987a-bcdaa2f6d3b1","Type":"ContainerStarted","Data":"5acb431b8e0eb7677b98d89ff66f186391a5d39e1b675d018fe41d3b90c905cd"} Feb 02 13:26:36 crc kubenswrapper[4721]: I0202 13:26:36.164499 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"1f591d46-b7ce-4767-987a-bcdaa2f6d3b1","Type":"ContainerStarted","Data":"069121de18c3f33f1304829d8169978a460562815bd16049d235be352ef578bb"} Feb 02 13:26:36 crc kubenswrapper[4721]: I0202 13:26:36.194508 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.194485063 podStartE2EDuration="2.194485063s" podCreationTimestamp="2026-02-02 13:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:26:36.18554655 +0000 UTC m=+1536.488060939" watchObservedRunningTime="2026-02-02 13:26:36.194485063 +0000 UTC m=+1536.496999472" Feb 02 13:26:37 crc kubenswrapper[4721]: I0202 13:26:37.178537 4721 generic.go:334] "Generic (PLEG): container finished" podID="8715f5a7-97a1-496d-be28-13c326b54135" containerID="187aece21bb2425b5bf5ce341d27bac916d46fddbd34aeac0734b4f84ad64e63" exitCode=0 Feb 02 13:26:37 crc kubenswrapper[4721]: I0202 13:26:37.182621 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8715f5a7-97a1-496d-be28-13c326b54135","Type":"ContainerDied","Data":"187aece21bb2425b5bf5ce341d27bac916d46fddbd34aeac0734b4f84ad64e63"} Feb 02 13:26:37 crc kubenswrapper[4721]: I0202 13:26:37.397961 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 13:26:37 crc kubenswrapper[4721]: I0202 13:26:37.429972 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8715f5a7-97a1-496d-be28-13c326b54135-logs\") pod \"8715f5a7-97a1-496d-be28-13c326b54135\" (UID: \"8715f5a7-97a1-496d-be28-13c326b54135\") " Feb 02 13:26:37 crc kubenswrapper[4721]: I0202 13:26:37.430261 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8715f5a7-97a1-496d-be28-13c326b54135-config-data\") pod \"8715f5a7-97a1-496d-be28-13c326b54135\" (UID: \"8715f5a7-97a1-496d-be28-13c326b54135\") " Feb 02 13:26:37 crc kubenswrapper[4721]: I0202 13:26:37.430377 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8715f5a7-97a1-496d-be28-13c326b54135-combined-ca-bundle\") pod \"8715f5a7-97a1-496d-be28-13c326b54135\" (UID: \"8715f5a7-97a1-496d-be28-13c326b54135\") " Feb 02 13:26:37 crc kubenswrapper[4721]: I0202 13:26:37.430432 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpktt\" (UniqueName: \"kubernetes.io/projected/8715f5a7-97a1-496d-be28-13c326b54135-kube-api-access-wpktt\") pod \"8715f5a7-97a1-496d-be28-13c326b54135\" (UID: \"8715f5a7-97a1-496d-be28-13c326b54135\") " Feb 02 13:26:37 crc kubenswrapper[4721]: I0202 13:26:37.432246 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8715f5a7-97a1-496d-be28-13c326b54135-logs" (OuterVolumeSpecName: "logs") pod "8715f5a7-97a1-496d-be28-13c326b54135" (UID: "8715f5a7-97a1-496d-be28-13c326b54135"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:26:37 crc kubenswrapper[4721]: I0202 13:26:37.440872 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8715f5a7-97a1-496d-be28-13c326b54135-kube-api-access-wpktt" (OuterVolumeSpecName: "kube-api-access-wpktt") pod "8715f5a7-97a1-496d-be28-13c326b54135" (UID: "8715f5a7-97a1-496d-be28-13c326b54135"). InnerVolumeSpecName "kube-api-access-wpktt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:26:37 crc kubenswrapper[4721]: I0202 13:26:37.465110 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8715f5a7-97a1-496d-be28-13c326b54135-config-data" (OuterVolumeSpecName: "config-data") pod "8715f5a7-97a1-496d-be28-13c326b54135" (UID: "8715f5a7-97a1-496d-be28-13c326b54135"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:26:37 crc kubenswrapper[4721]: I0202 13:26:37.477248 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8715f5a7-97a1-496d-be28-13c326b54135-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8715f5a7-97a1-496d-be28-13c326b54135" (UID: "8715f5a7-97a1-496d-be28-13c326b54135"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:26:37 crc kubenswrapper[4721]: I0202 13:26:37.533130 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8715f5a7-97a1-496d-be28-13c326b54135-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:37 crc kubenswrapper[4721]: I0202 13:26:37.533167 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8715f5a7-97a1-496d-be28-13c326b54135-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:37 crc kubenswrapper[4721]: I0202 13:26:37.533182 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpktt\" (UniqueName: \"kubernetes.io/projected/8715f5a7-97a1-496d-be28-13c326b54135-kube-api-access-wpktt\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:37 crc kubenswrapper[4721]: I0202 13:26:37.533190 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8715f5a7-97a1-496d-be28-13c326b54135-logs\") on node \"crc\" DevicePath \"\"" Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 13:26:38.192116 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8715f5a7-97a1-496d-be28-13c326b54135","Type":"ContainerDied","Data":"dda040b5b2dc6f664a770c8e322b217a952cd150043f7182ef1730acf2437842"} Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 13:26:38.192405 4721 scope.go:117] "RemoveContainer" containerID="187aece21bb2425b5bf5ce341d27bac916d46fddbd34aeac0734b4f84ad64e63" Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 13:26:38.192567 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 13:26:38.219138 4721 scope.go:117] "RemoveContainer" containerID="e678d0355031514e26c0e876e17ad1d6766f6e13bf506ad8e8a465663e92ced1" Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 13:26:38.240783 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 13:26:38.266013 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 13:26:38.292493 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 13:26:38 crc kubenswrapper[4721]: E0202 13:26:38.294429 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8715f5a7-97a1-496d-be28-13c326b54135" containerName="nova-api-log" Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 13:26:38.294468 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="8715f5a7-97a1-496d-be28-13c326b54135" containerName="nova-api-log" Feb 02 13:26:38 crc kubenswrapper[4721]: E0202 13:26:38.294501 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8715f5a7-97a1-496d-be28-13c326b54135" containerName="nova-api-api" Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 13:26:38.294513 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="8715f5a7-97a1-496d-be28-13c326b54135" containerName="nova-api-api" Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 13:26:38.294804 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="8715f5a7-97a1-496d-be28-13c326b54135" containerName="nova-api-log" Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 13:26:38.294837 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="8715f5a7-97a1-496d-be28-13c326b54135" containerName="nova-api-api" Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 13:26:38.296121 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 13:26:38.301850 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 13:26:38.308123 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 13:26:38.352845 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25b4g\" (UniqueName: \"kubernetes.io/projected/43a836c9-b9f7-4991-9cb4-db6dce6f8e08-kube-api-access-25b4g\") pod \"nova-api-0\" (UID: \"43a836c9-b9f7-4991-9cb4-db6dce6f8e08\") " pod="openstack/nova-api-0" Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 13:26:38.352943 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43a836c9-b9f7-4991-9cb4-db6dce6f8e08-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"43a836c9-b9f7-4991-9cb4-db6dce6f8e08\") " pod="openstack/nova-api-0" Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 13:26:38.352996 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43a836c9-b9f7-4991-9cb4-db6dce6f8e08-logs\") pod \"nova-api-0\" (UID: \"43a836c9-b9f7-4991-9cb4-db6dce6f8e08\") " pod="openstack/nova-api-0" Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 13:26:38.353042 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43a836c9-b9f7-4991-9cb4-db6dce6f8e08-config-data\") pod \"nova-api-0\" (UID: \"43a836c9-b9f7-4991-9cb4-db6dce6f8e08\") " pod="openstack/nova-api-0" Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 13:26:38.424650 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8715f5a7-97a1-496d-be28-13c326b54135" path="/var/lib/kubelet/pods/8715f5a7-97a1-496d-be28-13c326b54135/volumes" Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 13:26:38.455746 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25b4g\" (UniqueName: \"kubernetes.io/projected/43a836c9-b9f7-4991-9cb4-db6dce6f8e08-kube-api-access-25b4g\") pod \"nova-api-0\" (UID: \"43a836c9-b9f7-4991-9cb4-db6dce6f8e08\") " pod="openstack/nova-api-0" Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 13:26:38.455939 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43a836c9-b9f7-4991-9cb4-db6dce6f8e08-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"43a836c9-b9f7-4991-9cb4-db6dce6f8e08\") " pod="openstack/nova-api-0" Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 13:26:38.456047 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43a836c9-b9f7-4991-9cb4-db6dce6f8e08-logs\") pod \"nova-api-0\" (UID: \"43a836c9-b9f7-4991-9cb4-db6dce6f8e08\") " pod="openstack/nova-api-0" Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 13:26:38.456110 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43a836c9-b9f7-4991-9cb4-db6dce6f8e08-config-data\") pod \"nova-api-0\" (UID: \"43a836c9-b9f7-4991-9cb4-db6dce6f8e08\") " pod="openstack/nova-api-0" Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 
13:26:38.457393 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43a836c9-b9f7-4991-9cb4-db6dce6f8e08-logs\") pod \"nova-api-0\" (UID: \"43a836c9-b9f7-4991-9cb4-db6dce6f8e08\") " pod="openstack/nova-api-0" Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 13:26:38.464492 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43a836c9-b9f7-4991-9cb4-db6dce6f8e08-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"43a836c9-b9f7-4991-9cb4-db6dce6f8e08\") " pod="openstack/nova-api-0" Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 13:26:38.475868 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25b4g\" (UniqueName: \"kubernetes.io/projected/43a836c9-b9f7-4991-9cb4-db6dce6f8e08-kube-api-access-25b4g\") pod \"nova-api-0\" (UID: \"43a836c9-b9f7-4991-9cb4-db6dce6f8e08\") " pod="openstack/nova-api-0" Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 13:26:38.475883 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43a836c9-b9f7-4991-9cb4-db6dce6f8e08-config-data\") pod \"nova-api-0\" (UID: \"43a836c9-b9f7-4991-9cb4-db6dce6f8e08\") " pod="openstack/nova-api-0" Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 13:26:38.563117 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 13:26:38.564350 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 13:26:38 crc kubenswrapper[4721]: I0202 13:26:38.627399 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 13:26:39 crc kubenswrapper[4721]: I0202 13:26:39.194574 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:26:39 crc kubenswrapper[4721]: W0202 13:26:39.201516 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43a836c9_b9f7_4991_9cb4_db6dce6f8e08.slice/crio-4fca7f0bef96bd658599c1f0a2e8aa492cdcf6a60b42a42e88a85a2eec4a0be9 WatchSource:0}: Error finding container 4fca7f0bef96bd658599c1f0a2e8aa492cdcf6a60b42a42e88a85a2eec4a0be9: Status 404 returned error can't find the container with id 4fca7f0bef96bd658599c1f0a2e8aa492cdcf6a60b42a42e88a85a2eec4a0be9 Feb 02 13:26:39 crc kubenswrapper[4721]: I0202 13:26:39.638042 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 13:26:40 crc kubenswrapper[4721]: I0202 13:26:40.218665 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"43a836c9-b9f7-4991-9cb4-db6dce6f8e08","Type":"ContainerStarted","Data":"cb14a013f6e5ea8bc11bf1264095a5d3c6d4ad516551071c013c66ca9a01bd5e"} Feb 02 13:26:40 crc kubenswrapper[4721]: I0202 13:26:40.220306 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"43a836c9-b9f7-4991-9cb4-db6dce6f8e08","Type":"ContainerStarted","Data":"59c81dc2121b1526d3fc9cd11a40475f9cc813f682849fa4432ebb8a77fa2a04"} Feb 02 13:26:40 crc kubenswrapper[4721]: I0202 13:26:40.220419 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"43a836c9-b9f7-4991-9cb4-db6dce6f8e08","Type":"ContainerStarted","Data":"4fca7f0bef96bd658599c1f0a2e8aa492cdcf6a60b42a42e88a85a2eec4a0be9"} Feb 02 13:26:40 crc kubenswrapper[4721]: I0202 13:26:40.245700 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.245678245 podStartE2EDuration="2.245678245s" podCreationTimestamp="2026-02-02 13:26:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:26:40.241028399 +0000 UTC m=+1540.543542788" watchObservedRunningTime="2026-02-02 13:26:40.245678245 +0000 UTC m=+1540.548192634" Feb 02 13:26:40 crc kubenswrapper[4721]: E0202 13:26:40.330628 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf01b253a_c7c6_4c9e_a800_a1732ba06f37.slice/crio-5b3069f620076da85927b973fa828028181bff883e33dc0cd16ab04434fa94e6\": RecentStats: unable to find data in memory cache]" Feb 02 13:26:41 crc kubenswrapper[4721]: E0202 13:26:41.078585 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf01b253a_c7c6_4c9e_a800_a1732ba06f37.slice/crio-5b3069f620076da85927b973fa828028181bff883e33dc0cd16ab04434fa94e6\": RecentStats: unable to find data in memory cache]" Feb 02 13:26:41 crc kubenswrapper[4721]: I0202 13:26:41.944941 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 13:26:42 crc kubenswrapper[4721]: I0202 13:26:42.471816 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 02 13:26:43 crc kubenswrapper[4721]: I0202 13:26:43.562784 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 13:26:43 crc kubenswrapper[4721]: I0202 13:26:43.562867 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 13:26:44 crc kubenswrapper[4721]: I0202 13:26:44.577376 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fbfc7ae7-2e8c-4696-a72e-7308794bf726" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.254:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 13:26:44 crc kubenswrapper[4721]: I0202 13:26:44.578029 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fbfc7ae7-2e8c-4696-a72e-7308794bf726" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.254:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 13:26:44 crc kubenswrapper[4721]: I0202 13:26:44.638248 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 13:26:44 crc kubenswrapper[4721]: I0202 13:26:44.670205 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 13:26:45 crc kubenswrapper[4721]: I0202 13:26:45.324489 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 13:26:46 crc kubenswrapper[4721]: I0202 13:26:46.459758 4721 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 13:26:46 crc kubenswrapper[4721]: I0202 13:26:46.460518 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="cc071000-a602-4de6-a9bc-1c93b6d58c25" containerName="kube-state-metrics" containerID="cri-o://28479592c480267adca57389ec0895d73f9fc0cf7e8ea4b979c6f7d9640013a3" gracePeriod=30 Feb 02 13:26:46 crc kubenswrapper[4721]: I0202 13:26:46.593711 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 02 13:26:46 crc kubenswrapper[4721]: I0202 13:26:46.593958 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="7a6930c7-1819-4b7d-baf6-773a8b68e568" containerName="mysqld-exporter" containerID="cri-o://425d57884fbd57da1d35ea45143e44ac869dd71714bc4250dbb690863f08fa73" gracePeriod=30 Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.113477 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.160137 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlrn8\" (UniqueName: \"kubernetes.io/projected/cc071000-a602-4de6-a9bc-1c93b6d58c25-kube-api-access-wlrn8\") pod \"cc071000-a602-4de6-a9bc-1c93b6d58c25\" (UID: \"cc071000-a602-4de6-a9bc-1c93b6d58c25\") " Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.168265 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc071000-a602-4de6-a9bc-1c93b6d58c25-kube-api-access-wlrn8" (OuterVolumeSpecName: "kube-api-access-wlrn8") pod "cc071000-a602-4de6-a9bc-1c93b6d58c25" (UID: "cc071000-a602-4de6-a9bc-1c93b6d58c25"). InnerVolumeSpecName "kube-api-access-wlrn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.241956 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.261653 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a6930c7-1819-4b7d-baf6-773a8b68e568-combined-ca-bundle\") pod \"7a6930c7-1819-4b7d-baf6-773a8b68e568\" (UID: \"7a6930c7-1819-4b7d-baf6-773a8b68e568\") "
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.261777 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a6930c7-1819-4b7d-baf6-773a8b68e568-config-data\") pod \"7a6930c7-1819-4b7d-baf6-773a8b68e568\" (UID: \"7a6930c7-1819-4b7d-baf6-773a8b68e568\") "
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.261852 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jz54\" (UniqueName: \"kubernetes.io/projected/7a6930c7-1819-4b7d-baf6-773a8b68e568-kube-api-access-8jz54\") pod \"7a6930c7-1819-4b7d-baf6-773a8b68e568\" (UID: \"7a6930c7-1819-4b7d-baf6-773a8b68e568\") "
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.262558 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlrn8\" (UniqueName: \"kubernetes.io/projected/cc071000-a602-4de6-a9bc-1c93b6d58c25-kube-api-access-wlrn8\") on node \"crc\" DevicePath \"\""
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.276673 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a6930c7-1819-4b7d-baf6-773a8b68e568-kube-api-access-8jz54" (OuterVolumeSpecName: "kube-api-access-8jz54") pod "7a6930c7-1819-4b7d-baf6-773a8b68e568" (UID: "7a6930c7-1819-4b7d-baf6-773a8b68e568"). InnerVolumeSpecName "kube-api-access-8jz54". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.307268 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a6930c7-1819-4b7d-baf6-773a8b68e568-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a6930c7-1819-4b7d-baf6-773a8b68e568" (UID: "7a6930c7-1819-4b7d-baf6-773a8b68e568"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.311772 4721 generic.go:334] "Generic (PLEG): container finished" podID="7a6930c7-1819-4b7d-baf6-773a8b68e568" containerID="425d57884fbd57da1d35ea45143e44ac869dd71714bc4250dbb690863f08fa73" exitCode=2
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.311837 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"7a6930c7-1819-4b7d-baf6-773a8b68e568","Type":"ContainerDied","Data":"425d57884fbd57da1d35ea45143e44ac869dd71714bc4250dbb690863f08fa73"}
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.311865 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"7a6930c7-1819-4b7d-baf6-773a8b68e568","Type":"ContainerDied","Data":"29066421e6cce726a66f30e6952c937493f0f81dbe0ff9779f6c880b60322c1e"}
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.311881 4721 scope.go:117] "RemoveContainer" containerID="425d57884fbd57da1d35ea45143e44ac869dd71714bc4250dbb690863f08fa73"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.312025 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.317770 4721 generic.go:334] "Generic (PLEG): container finished" podID="cc071000-a602-4de6-a9bc-1c93b6d58c25" containerID="28479592c480267adca57389ec0895d73f9fc0cf7e8ea4b979c6f7d9640013a3" exitCode=2
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.317814 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cc071000-a602-4de6-a9bc-1c93b6d58c25","Type":"ContainerDied","Data":"28479592c480267adca57389ec0895d73f9fc0cf7e8ea4b979c6f7d9640013a3"}
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.317840 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cc071000-a602-4de6-a9bc-1c93b6d58c25","Type":"ContainerDied","Data":"f06dd476cbb6c5a0d98d77cc9568acefbefff19f14d266eab26513942d1c3774"}
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.317890 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.350516 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a6930c7-1819-4b7d-baf6-773a8b68e568-config-data" (OuterVolumeSpecName: "config-data") pod "7a6930c7-1819-4b7d-baf6-773a8b68e568" (UID: "7a6930c7-1819-4b7d-baf6-773a8b68e568"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.352685 4721 scope.go:117] "RemoveContainer" containerID="425d57884fbd57da1d35ea45143e44ac869dd71714bc4250dbb690863f08fa73"
Feb 02 13:26:47 crc kubenswrapper[4721]: E0202 13:26:47.354494 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"425d57884fbd57da1d35ea45143e44ac869dd71714bc4250dbb690863f08fa73\": container with ID starting with 425d57884fbd57da1d35ea45143e44ac869dd71714bc4250dbb690863f08fa73 not found: ID does not exist" containerID="425d57884fbd57da1d35ea45143e44ac869dd71714bc4250dbb690863f08fa73"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.354566 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"425d57884fbd57da1d35ea45143e44ac869dd71714bc4250dbb690863f08fa73"} err="failed to get container status \"425d57884fbd57da1d35ea45143e44ac869dd71714bc4250dbb690863f08fa73\": rpc error: code = NotFound desc = could not find container \"425d57884fbd57da1d35ea45143e44ac869dd71714bc4250dbb690863f08fa73\": container with ID starting with 425d57884fbd57da1d35ea45143e44ac869dd71714bc4250dbb690863f08fa73 not found: ID does not exist"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.354648 4721 scope.go:117] "RemoveContainer" containerID="28479592c480267adca57389ec0895d73f9fc0cf7e8ea4b979c6f7d9640013a3"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.366515 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.366926 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a6930c7-1819-4b7d-baf6-773a8b68e568-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.366955 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a6930c7-1819-4b7d-baf6-773a8b68e568-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.366968 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jz54\" (UniqueName: \"kubernetes.io/projected/7a6930c7-1819-4b7d-baf6-773a8b68e568-kube-api-access-8jz54\") on node \"crc\" DevicePath \"\""
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.383277 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.390769 4721 scope.go:117] "RemoveContainer" containerID="28479592c480267adca57389ec0895d73f9fc0cf7e8ea4b979c6f7d9640013a3"
Feb 02 13:26:47 crc kubenswrapper[4721]: E0202 13:26:47.392887 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28479592c480267adca57389ec0895d73f9fc0cf7e8ea4b979c6f7d9640013a3\": container with ID starting with 28479592c480267adca57389ec0895d73f9fc0cf7e8ea4b979c6f7d9640013a3 not found: ID does not exist" containerID="28479592c480267adca57389ec0895d73f9fc0cf7e8ea4b979c6f7d9640013a3"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.392945 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28479592c480267adca57389ec0895d73f9fc0cf7e8ea4b979c6f7d9640013a3"} err="failed to get container status \"28479592c480267adca57389ec0895d73f9fc0cf7e8ea4b979c6f7d9640013a3\": rpc error: code = NotFound desc = could not find container \"28479592c480267adca57389ec0895d73f9fc0cf7e8ea4b979c6f7d9640013a3\": container with ID starting with 28479592c480267adca57389ec0895d73f9fc0cf7e8ea4b979c6f7d9640013a3 not found: ID does not exist"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.399192 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 02 13:26:47 crc kubenswrapper[4721]: E0202 13:26:47.399719 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a6930c7-1819-4b7d-baf6-773a8b68e568" containerName="mysqld-exporter"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.399739 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a6930c7-1819-4b7d-baf6-773a8b68e568" containerName="mysqld-exporter"
Feb 02 13:26:47 crc kubenswrapper[4721]: E0202 13:26:47.399781 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc071000-a602-4de6-a9bc-1c93b6d58c25" containerName="kube-state-metrics"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.399791 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc071000-a602-4de6-a9bc-1c93b6d58c25" containerName="kube-state-metrics"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.399999 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a6930c7-1819-4b7d-baf6-773a8b68e568" containerName="mysqld-exporter"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.400042 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc071000-a602-4de6-a9bc-1c93b6d58c25" containerName="kube-state-metrics"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.400893 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.403192 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.405978 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.414558 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.468935 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac827915-eefd-428b-9303-581069f92ed8-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ac827915-eefd-428b-9303-581069f92ed8\") " pod="openstack/kube-state-metrics-0"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.469060 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxpdj\" (UniqueName: \"kubernetes.io/projected/ac827915-eefd-428b-9303-581069f92ed8-kube-api-access-fxpdj\") pod \"kube-state-metrics-0\" (UID: \"ac827915-eefd-428b-9303-581069f92ed8\") " pod="openstack/kube-state-metrics-0"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.469138 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac827915-eefd-428b-9303-581069f92ed8-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ac827915-eefd-428b-9303-581069f92ed8\") " pod="openstack/kube-state-metrics-0"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.469305 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ac827915-eefd-428b-9303-581069f92ed8-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ac827915-eefd-428b-9303-581069f92ed8\") " pod="openstack/kube-state-metrics-0"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.571550 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ac827915-eefd-428b-9303-581069f92ed8-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ac827915-eefd-428b-9303-581069f92ed8\") " pod="openstack/kube-state-metrics-0"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.571687 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac827915-eefd-428b-9303-581069f92ed8-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ac827915-eefd-428b-9303-581069f92ed8\") " pod="openstack/kube-state-metrics-0"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.571756 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxpdj\" (UniqueName: \"kubernetes.io/projected/ac827915-eefd-428b-9303-581069f92ed8-kube-api-access-fxpdj\") pod \"kube-state-metrics-0\" (UID: \"ac827915-eefd-428b-9303-581069f92ed8\") " pod="openstack/kube-state-metrics-0"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.571803 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac827915-eefd-428b-9303-581069f92ed8-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ac827915-eefd-428b-9303-581069f92ed8\") " pod="openstack/kube-state-metrics-0"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.575955 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ac827915-eefd-428b-9303-581069f92ed8-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ac827915-eefd-428b-9303-581069f92ed8\") " pod="openstack/kube-state-metrics-0"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.577207 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac827915-eefd-428b-9303-581069f92ed8-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ac827915-eefd-428b-9303-581069f92ed8\") " pod="openstack/kube-state-metrics-0"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.578581 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac827915-eefd-428b-9303-581069f92ed8-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ac827915-eefd-428b-9303-581069f92ed8\") " pod="openstack/kube-state-metrics-0"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.597914 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxpdj\" (UniqueName: \"kubernetes.io/projected/ac827915-eefd-428b-9303-581069f92ed8-kube-api-access-fxpdj\") pod \"kube-state-metrics-0\" (UID: \"ac827915-eefd-428b-9303-581069f92ed8\") " pod="openstack/kube-state-metrics-0"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.722331 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.800461 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.830323 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.850555 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.853191 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.858914 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.859187 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.864058 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.986507 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/8abde028-43c5-4489-8de6-7c2da9f037c2-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"8abde028-43c5-4489-8de6-7c2da9f037c2\") " pod="openstack/mysqld-exporter-0"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.986596 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2bbp\" (UniqueName: \"kubernetes.io/projected/8abde028-43c5-4489-8de6-7c2da9f037c2-kube-api-access-t2bbp\") pod \"mysqld-exporter-0\" (UID: \"8abde028-43c5-4489-8de6-7c2da9f037c2\") " pod="openstack/mysqld-exporter-0"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.986690 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abde028-43c5-4489-8de6-7c2da9f037c2-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"8abde028-43c5-4489-8de6-7c2da9f037c2\") " pod="openstack/mysqld-exporter-0"
Feb 02 13:26:47 crc kubenswrapper[4721]: I0202 13:26:47.986709 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8abde028-43c5-4489-8de6-7c2da9f037c2-config-data\") pod \"mysqld-exporter-0\" (UID: \"8abde028-43c5-4489-8de6-7c2da9f037c2\") " pod="openstack/mysqld-exporter-0"
Feb 02 13:26:48 crc kubenswrapper[4721]: I0202 13:26:48.089173 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abde028-43c5-4489-8de6-7c2da9f037c2-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"8abde028-43c5-4489-8de6-7c2da9f037c2\") " pod="openstack/mysqld-exporter-0"
Feb 02 13:26:48 crc kubenswrapper[4721]: I0202 13:26:48.089219 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8abde028-43c5-4489-8de6-7c2da9f037c2-config-data\") pod \"mysqld-exporter-0\" (UID: \"8abde028-43c5-4489-8de6-7c2da9f037c2\") " pod="openstack/mysqld-exporter-0"
Feb 02 13:26:48 crc kubenswrapper[4721]: I0202 13:26:48.089361 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/8abde028-43c5-4489-8de6-7c2da9f037c2-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"8abde028-43c5-4489-8de6-7c2da9f037c2\") " pod="openstack/mysqld-exporter-0"
Feb 02 13:26:48 crc kubenswrapper[4721]: I0202 13:26:48.089444 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2bbp\" (UniqueName: \"kubernetes.io/projected/8abde028-43c5-4489-8de6-7c2da9f037c2-kube-api-access-t2bbp\") pod \"mysqld-exporter-0\" (UID: \"8abde028-43c5-4489-8de6-7c2da9f037c2\") " pod="openstack/mysqld-exporter-0"
Feb 02 13:26:48 crc kubenswrapper[4721]: I0202 13:26:48.097244 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/8abde028-43c5-4489-8de6-7c2da9f037c2-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"8abde028-43c5-4489-8de6-7c2da9f037c2\") " pod="openstack/mysqld-exporter-0"
Feb 02 13:26:48 crc kubenswrapper[4721]: I0202 13:26:48.101863 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8abde028-43c5-4489-8de6-7c2da9f037c2-config-data\") pod \"mysqld-exporter-0\" (UID: \"8abde028-43c5-4489-8de6-7c2da9f037c2\") " pod="openstack/mysqld-exporter-0"
Feb 02 13:26:48 crc kubenswrapper[4721]: I0202 13:26:48.102706 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abde028-43c5-4489-8de6-7c2da9f037c2-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"8abde028-43c5-4489-8de6-7c2da9f037c2\") " pod="openstack/mysqld-exporter-0"
Feb 02 13:26:48 crc kubenswrapper[4721]: I0202 13:26:48.131828 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2bbp\" (UniqueName: \"kubernetes.io/projected/8abde028-43c5-4489-8de6-7c2da9f037c2-kube-api-access-t2bbp\") pod \"mysqld-exporter-0\" (UID: \"8abde028-43c5-4489-8de6-7c2da9f037c2\") " pod="openstack/mysqld-exporter-0"
Feb 02 13:26:48 crc kubenswrapper[4721]: I0202 13:26:48.212291 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Feb 02 13:26:48 crc kubenswrapper[4721]: I0202 13:26:48.340374 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 02 13:26:48 crc kubenswrapper[4721]: E0202 13:26:48.360909 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf01b253a_c7c6_4c9e_a800_a1732ba06f37.slice/crio-5b3069f620076da85927b973fa828028181bff883e33dc0cd16ab04434fa94e6\": RecentStats: unable to find data in memory cache]"
Feb 02 13:26:48 crc kubenswrapper[4721]: E0202 13:26:48.361413 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf01b253a_c7c6_4c9e_a800_a1732ba06f37.slice/crio-5b3069f620076da85927b973fa828028181bff883e33dc0cd16ab04434fa94e6\": RecentStats: unable to find data in memory cache]"
Feb 02 13:26:48 crc kubenswrapper[4721]: I0202 13:26:48.432713 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a6930c7-1819-4b7d-baf6-773a8b68e568" path="/var/lib/kubelet/pods/7a6930c7-1819-4b7d-baf6-773a8b68e568/volumes"
Feb 02 13:26:48 crc kubenswrapper[4721]: I0202 13:26:48.433357 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc071000-a602-4de6-a9bc-1c93b6d58c25" path="/var/lib/kubelet/pods/cc071000-a602-4de6-a9bc-1c93b6d58c25/volumes"
Feb 02 13:26:48 crc kubenswrapper[4721]: I0202 13:26:48.629053 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 02 13:26:48 crc kubenswrapper[4721]: I0202 13:26:48.629411 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 02 13:26:48 crc kubenswrapper[4721]: I0202 13:26:48.840485 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 02 13:26:49 crc kubenswrapper[4721]: I0202 13:26:49.416313 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ac827915-eefd-428b-9303-581069f92ed8","Type":"ContainerStarted","Data":"e2a207f99084f376858d13078ae9481c775747749fd195e00939bc5fb045a904"}
Feb 02 13:26:49 crc kubenswrapper[4721]: I0202 13:26:49.416617 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ac827915-eefd-428b-9303-581069f92ed8","Type":"ContainerStarted","Data":"38557ac7c7d354970f97a02935fc45e00a7133c6ffa25beb95cc1dab3b8d385f"}
Feb 02 13:26:49 crc kubenswrapper[4721]: I0202 13:26:49.418047 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 02 13:26:49 crc kubenswrapper[4721]: I0202 13:26:49.421041 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"8abde028-43c5-4489-8de6-7c2da9f037c2","Type":"ContainerStarted","Data":"9df0c5b879f442ee98d22de6e236b64518fc82803847884f8e77d69a0544e020"}
Feb 02 13:26:49 crc kubenswrapper[4721]: I0202 13:26:49.437671 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 13:26:49 crc kubenswrapper[4721]: I0202 13:26:49.438096 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dd964db7-c2d3-477b-be71-60058c811541" containerName="ceilometer-central-agent" containerID="cri-o://941cea547a9c6139f4cd34b3cba1a9232469327b41a70627da4554d99e83c28b" gracePeriod=30
Feb 02 13:26:49 crc kubenswrapper[4721]: I0202 13:26:49.438718 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dd964db7-c2d3-477b-be71-60058c811541" containerName="proxy-httpd" containerID="cri-o://4b856042309c84ea0a6e8c400ccf627a4542f6766de95577e1154ef8996d41d0" gracePeriod=30
Feb 02 13:26:49 crc kubenswrapper[4721]: I0202 13:26:49.438797 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dd964db7-c2d3-477b-be71-60058c811541" containerName="sg-core" containerID="cri-o://2e794fb911c91ba362224447e9ecd49d098fbbb2f1fb15a45aedc118537561bc" gracePeriod=30
Feb 02 13:26:49 crc kubenswrapper[4721]: I0202 13:26:49.438848 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dd964db7-c2d3-477b-be71-60058c811541" containerName="ceilometer-notification-agent" containerID="cri-o://3fc0efffdb16822dab9442b0b34ece997fbe26ebe5808aadd9adb66a693a7dd5" gracePeriod=30
Feb 02 13:26:49 crc kubenswrapper[4721]: I0202 13:26:49.447466 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.070987255 podStartE2EDuration="2.447443595s" podCreationTimestamp="2026-02-02 13:26:47 +0000 UTC" firstStartedPulling="2026-02-02 13:26:48.372241245 +0000 UTC m=+1548.674755634" lastFinishedPulling="2026-02-02 13:26:48.748697585 +0000 UTC m=+1549.051211974" observedRunningTime="2026-02-02 13:26:49.437303621 +0000 UTC m=+1549.739818010" watchObservedRunningTime="2026-02-02 13:26:49.447443595 +0000 UTC m=+1549.749957994"
Feb 02 13:26:49 crc kubenswrapper[4721]: I0202 13:26:49.712253 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="43a836c9-b9f7-4991-9cb4-db6dce6f8e08" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.0:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 02 13:26:49 crc kubenswrapper[4721]: I0202 13:26:49.712286 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="43a836c9-b9f7-4991-9cb4-db6dce6f8e08" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.0:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 02 13:26:50 crc kubenswrapper[4721]: I0202 13:26:50.464044 4721 generic.go:334] "Generic (PLEG): container finished" podID="dd964db7-c2d3-477b-be71-60058c811541" containerID="4b856042309c84ea0a6e8c400ccf627a4542f6766de95577e1154ef8996d41d0" exitCode=0
Feb 02 13:26:50 crc kubenswrapper[4721]: I0202 13:26:50.464491 4721 generic.go:334] "Generic (PLEG): container finished" podID="dd964db7-c2d3-477b-be71-60058c811541" containerID="2e794fb911c91ba362224447e9ecd49d098fbbb2f1fb15a45aedc118537561bc" exitCode=2
Feb 02 13:26:50 crc kubenswrapper[4721]: I0202 13:26:50.464507 4721 generic.go:334] "Generic (PLEG): container finished" podID="dd964db7-c2d3-477b-be71-60058c811541" containerID="941cea547a9c6139f4cd34b3cba1a9232469327b41a70627da4554d99e83c28b" exitCode=0
Feb 02 13:26:50 crc kubenswrapper[4721]: I0202 13:26:50.464571 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd964db7-c2d3-477b-be71-60058c811541","Type":"ContainerDied","Data":"4b856042309c84ea0a6e8c400ccf627a4542f6766de95577e1154ef8996d41d0"}
Feb 02 13:26:50 crc kubenswrapper[4721]: I0202 13:26:50.464609 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd964db7-c2d3-477b-be71-60058c811541","Type":"ContainerDied","Data":"2e794fb911c91ba362224447e9ecd49d098fbbb2f1fb15a45aedc118537561bc"}
Feb 02 13:26:50 crc kubenswrapper[4721]: I0202 13:26:50.464625 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd964db7-c2d3-477b-be71-60058c811541","Type":"ContainerDied","Data":"941cea547a9c6139f4cd34b3cba1a9232469327b41a70627da4554d99e83c28b"}
Feb 02 13:26:50 crc kubenswrapper[4721]: I0202 13:26:50.468000 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"8abde028-43c5-4489-8de6-7c2da9f037c2","Type":"ContainerStarted","Data":"5d7454ef92695803ec7743b8396827a94b86a25fb897e83e2a1701b15a668f42"}
Feb 02 13:26:50 crc kubenswrapper[4721]: I0202 13:26:50.495358 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.840500845 podStartE2EDuration="3.495334875s" podCreationTimestamp="2026-02-02 13:26:47 +0000 UTC" firstStartedPulling="2026-02-02 13:26:48.856525949 +0000 UTC m=+1549.159040328" lastFinishedPulling="2026-02-02 13:26:49.511359969 +0000 UTC m=+1549.813874358" observedRunningTime="2026-02-02 13:26:50.486979698 +0000 UTC m=+1550.789494077" watchObservedRunningTime="2026-02-02 13:26:50.495334875 +0000 UTC m=+1550.797849274"
Feb 02 13:26:51 crc kubenswrapper[4721]: E0202 13:26:51.139528 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf01b253a_c7c6_4c9e_a800_a1732ba06f37.slice/crio-5b3069f620076da85927b973fa828028181bff883e33dc0cd16ab04434fa94e6\": RecentStats: unable to find data in memory cache]"
Feb 02 13:26:53 crc kubenswrapper[4721]: I0202 13:26:53.500340 4721 generic.go:334] "Generic (PLEG): container finished" podID="dd964db7-c2d3-477b-be71-60058c811541" containerID="3fc0efffdb16822dab9442b0b34ece997fbe26ebe5808aadd9adb66a693a7dd5" exitCode=0
Feb 02 13:26:53 crc kubenswrapper[4721]: I0202 13:26:53.500422 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd964db7-c2d3-477b-be71-60058c811541","Type":"ContainerDied","Data":"3fc0efffdb16822dab9442b0b34ece997fbe26ebe5808aadd9adb66a693a7dd5"}
Feb 02 13:26:53 crc kubenswrapper[4721]: I0202 13:26:53.500679 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd964db7-c2d3-477b-be71-60058c811541","Type":"ContainerDied","Data":"72f8e4545197a36201cfdd4ca1128b7b9319c774b1a3e5c0ba44e89faa242906"}
Feb 02 13:26:53 crc kubenswrapper[4721]: I0202 13:26:53.500694 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72f8e4545197a36201cfdd4ca1128b7b9319c774b1a3e5c0ba44e89faa242906"
Feb 02 13:26:53 crc kubenswrapper[4721]: I0202 13:26:53.569945 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 02 13:26:53 crc kubenswrapper[4721]: I0202 13:26:53.575461 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 02 13:26:53 crc kubenswrapper[4721]: I0202 13:26:53.576824 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 02 13:26:53 crc kubenswrapper[4721]: I0202 13:26:53.579545 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 13:26:53 crc kubenswrapper[4721]: I0202 13:26:53.736106 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd964db7-c2d3-477b-be71-60058c811541-log-httpd\") pod \"dd964db7-c2d3-477b-be71-60058c811541\" (UID: \"dd964db7-c2d3-477b-be71-60058c811541\") "
Feb 02 13:26:53 crc kubenswrapper[4721]: I0202 13:26:53.736201 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlqf6\" (UniqueName: \"kubernetes.io/projected/dd964db7-c2d3-477b-be71-60058c811541-kube-api-access-rlqf6\") pod \"dd964db7-c2d3-477b-be71-60058c811541\" (UID: \"dd964db7-c2d3-477b-be71-60058c811541\") "
Feb 02 13:26:53 crc kubenswrapper[4721]: I0202 13:26:53.736244 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd964db7-c2d3-477b-be71-60058c811541-scripts\") pod \"dd964db7-c2d3-477b-be71-60058c811541\" (UID: \"dd964db7-c2d3-477b-be71-60058c811541\") "
Feb 02 13:26:53 crc kubenswrapper[4721]: I0202 13:26:53.736417 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd964db7-c2d3-477b-be71-60058c811541-sg-core-conf-yaml\") pod \"dd964db7-c2d3-477b-be71-60058c811541\" (UID: \"dd964db7-c2d3-477b-be71-60058c811541\") "
Feb 02 13:26:53 crc kubenswrapper[4721]: I0202 13:26:53.736593 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd964db7-c2d3-477b-be71-60058c811541-config-data\") pod \"dd964db7-c2d3-477b-be71-60058c811541\" (UID: \"dd964db7-c2d3-477b-be71-60058c811541\") "
Feb 02 13:26:53 crc kubenswrapper[4721]: I0202 13:26:53.736649 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd964db7-c2d3-477b-be71-60058c811541-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dd964db7-c2d3-477b-be71-60058c811541" (UID: "dd964db7-c2d3-477b-be71-60058c811541"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:26:53 crc kubenswrapper[4721]: I0202 13:26:53.736686 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd964db7-c2d3-477b-be71-60058c811541-run-httpd\") pod \"dd964db7-c2d3-477b-be71-60058c811541\" (UID: \"dd964db7-c2d3-477b-be71-60058c811541\") "
Feb 02 13:26:53 crc kubenswrapper[4721]: I0202 13:26:53.736731 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd964db7-c2d3-477b-be71-60058c811541-combined-ca-bundle\") pod \"dd964db7-c2d3-477b-be71-60058c811541\" (UID: \"dd964db7-c2d3-477b-be71-60058c811541\") "
Feb 02 13:26:53 crc kubenswrapper[4721]: I0202 13:26:53.736907 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd964db7-c2d3-477b-be71-60058c811541-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dd964db7-c2d3-477b-be71-60058c811541" (UID: "dd964db7-c2d3-477b-be71-60058c811541"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:26:53 crc kubenswrapper[4721]: I0202 13:26:53.737423 4721 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd964db7-c2d3-477b-be71-60058c811541-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 02 13:26:53 crc kubenswrapper[4721]: I0202 13:26:53.737442 4721 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd964db7-c2d3-477b-be71-60058c811541-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 02 13:26:53 crc kubenswrapper[4721]: I0202 13:26:53.741826 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd964db7-c2d3-477b-be71-60058c811541-scripts" (OuterVolumeSpecName: "scripts") pod "dd964db7-c2d3-477b-be71-60058c811541" (UID: "dd964db7-c2d3-477b-be71-60058c811541"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:26:53 crc kubenswrapper[4721]: I0202 13:26:53.743726 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd964db7-c2d3-477b-be71-60058c811541-kube-api-access-rlqf6" (OuterVolumeSpecName: "kube-api-access-rlqf6") pod "dd964db7-c2d3-477b-be71-60058c811541" (UID: "dd964db7-c2d3-477b-be71-60058c811541"). InnerVolumeSpecName "kube-api-access-rlqf6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:26:53 crc kubenswrapper[4721]: I0202 13:26:53.779594 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd964db7-c2d3-477b-be71-60058c811541-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dd964db7-c2d3-477b-be71-60058c811541" (UID: "dd964db7-c2d3-477b-be71-60058c811541"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:26:53 crc kubenswrapper[4721]: I0202 13:26:53.839375 4721 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd964db7-c2d3-477b-be71-60058c811541-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 02 13:26:53 crc kubenswrapper[4721]: I0202 13:26:53.839437 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlqf6\" (UniqueName: \"kubernetes.io/projected/dd964db7-c2d3-477b-be71-60058c811541-kube-api-access-rlqf6\") on node \"crc\" DevicePath \"\""
Feb 02 13:26:53 crc kubenswrapper[4721]: I0202 13:26:53.839452 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd964db7-c2d3-477b-be71-60058c811541-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 13:26:53 crc kubenswrapper[4721]: I0202 13:26:53.848312 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd964db7-c2d3-477b-be71-60058c811541-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd964db7-c2d3-477b-be71-60058c811541" (UID: "dd964db7-c2d3-477b-be71-60058c811541"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:26:53 crc kubenswrapper[4721]: I0202 13:26:53.861256 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd964db7-c2d3-477b-be71-60058c811541-config-data" (OuterVolumeSpecName: "config-data") pod "dd964db7-c2d3-477b-be71-60058c811541" (UID: "dd964db7-c2d3-477b-be71-60058c811541"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:26:53 crc kubenswrapper[4721]: I0202 13:26:53.941831 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd964db7-c2d3-477b-be71-60058c811541-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 13:26:53 crc kubenswrapper[4721]: I0202 13:26:53.941866 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd964db7-c2d3-477b-be71-60058c811541-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.510987 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.519146 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.537184 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.548485 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.588666 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 02 13:26:54 crc kubenswrapper[4721]: E0202 13:26:54.589573 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd964db7-c2d3-477b-be71-60058c811541" containerName="ceilometer-notification-agent"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.589599 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd964db7-c2d3-477b-be71-60058c811541" containerName="ceilometer-notification-agent"
Feb 02 13:26:54 crc kubenswrapper[4721]: E0202 13:26:54.589644 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd964db7-c2d3-477b-be71-60058c811541" containerName="proxy-httpd"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.589653 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd964db7-c2d3-477b-be71-60058c811541" containerName="proxy-httpd"
Feb 02 13:26:54 crc kubenswrapper[4721]: E0202 13:26:54.589682 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd964db7-c2d3-477b-be71-60058c811541" containerName="ceilometer-central-agent"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.589692 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd964db7-c2d3-477b-be71-60058c811541" containerName="ceilometer-central-agent"
Feb 02 13:26:54 crc kubenswrapper[4721]: E0202 13:26:54.589712 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd964db7-c2d3-477b-be71-60058c811541" containerName="sg-core"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.589722 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd964db7-c2d3-477b-be71-60058c811541" containerName="sg-core"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.590013 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd964db7-c2d3-477b-be71-60058c811541" containerName="proxy-httpd"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.590038 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd964db7-c2d3-477b-be71-60058c811541" containerName="sg-core"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.590095 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd964db7-c2d3-477b-be71-60058c811541" containerName="ceilometer-notification-agent"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.590109 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd964db7-c2d3-477b-be71-60058c811541" containerName="ceilometer-central-agent"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.592865 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.597144 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.597375 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.597541 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.625723 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.661816 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-scripts\") pod \"ceilometer-0\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " pod="openstack/ceilometer-0"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.662247 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " pod="openstack/ceilometer-0"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.662508 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " pod="openstack/ceilometer-0"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.662668 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-config-data\") pod \"ceilometer-0\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " pod="openstack/ceilometer-0"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.662726 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s9dq\" (UniqueName: \"kubernetes.io/projected/aff077fb-9974-49d0-a292-6ec2e865fb66-kube-api-access-4s9dq\") pod \"ceilometer-0\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " pod="openstack/ceilometer-0"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.662767 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aff077fb-9974-49d0-a292-6ec2e865fb66-log-httpd\") pod \"ceilometer-0\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " pod="openstack/ceilometer-0"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.662813 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " pod="openstack/ceilometer-0"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.662900 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aff077fb-9974-49d0-a292-6ec2e865fb66-run-httpd\") pod \"ceilometer-0\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " pod="openstack/ceilometer-0"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.764952 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-config-data\") pod \"ceilometer-0\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " pod="openstack/ceilometer-0"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.765021 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s9dq\" (UniqueName: \"kubernetes.io/projected/aff077fb-9974-49d0-a292-6ec2e865fb66-kube-api-access-4s9dq\") pod \"ceilometer-0\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " pod="openstack/ceilometer-0"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.765053 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aff077fb-9974-49d0-a292-6ec2e865fb66-log-httpd\") pod \"ceilometer-0\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " pod="openstack/ceilometer-0"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.765103 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " pod="openstack/ceilometer-0"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.765161 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aff077fb-9974-49d0-a292-6ec2e865fb66-run-httpd\") pod \"ceilometer-0\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " pod="openstack/ceilometer-0"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.765226 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-scripts\") pod \"ceilometer-0\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " pod="openstack/ceilometer-0"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.765362 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " pod="openstack/ceilometer-0"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.765457 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " pod="openstack/ceilometer-0"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.767026 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aff077fb-9974-49d0-a292-6ec2e865fb66-log-httpd\") pod \"ceilometer-0\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " pod="openstack/ceilometer-0"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.767472 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aff077fb-9974-49d0-a292-6ec2e865fb66-run-httpd\") pod \"ceilometer-0\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " pod="openstack/ceilometer-0"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.770447 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " pod="openstack/ceilometer-0"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.771088 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " pod="openstack/ceilometer-0"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.771601 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " pod="openstack/ceilometer-0"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.772448 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-config-data\") pod \"ceilometer-0\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " pod="openstack/ceilometer-0"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.772697 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-scripts\") pod \"ceilometer-0\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " pod="openstack/ceilometer-0"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.790411 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s9dq\" (UniqueName: \"kubernetes.io/projected/aff077fb-9974-49d0-a292-6ec2e865fb66-kube-api-access-4s9dq\") pod \"ceilometer-0\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " pod="openstack/ceilometer-0"
Feb 02 13:26:54 crc kubenswrapper[4721]: I0202 13:26:54.918033 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 13:26:55 crc kubenswrapper[4721]: E0202 13:26:55.340130 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf01b253a_c7c6_4c9e_a800_a1732ba06f37.slice/crio-5b3069f620076da85927b973fa828028181bff883e33dc0cd16ab04434fa94e6\": RecentStats: unable to find data in memory cache]"
Feb 02 13:26:55 crc kubenswrapper[4721]: I0202 13:26:55.438447 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:26:55 crc kubenswrapper[4721]: I0202 13:26:55.509795 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e-combined-ca-bundle\") pod \"20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e\" (UID: \"20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e\") "
Feb 02 13:26:55 crc kubenswrapper[4721]: I0202 13:26:55.509838 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e-config-data\") pod \"20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e\" (UID: \"20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e\") "
Feb 02 13:26:55 crc kubenswrapper[4721]: I0202 13:26:55.509966 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nh8l7\" (UniqueName: \"kubernetes.io/projected/20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e-kube-api-access-nh8l7\") pod \"20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e\" (UID: \"20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e\") "
Feb 02 13:26:55 crc kubenswrapper[4721]: I0202 13:26:55.525477 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e-kube-api-access-nh8l7" (OuterVolumeSpecName: "kube-api-access-nh8l7") pod "20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e" (UID: "20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e"). InnerVolumeSpecName "kube-api-access-nh8l7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:26:55 crc kubenswrapper[4721]: I0202 13:26:55.542645 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e" (UID: "20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:26:55 crc kubenswrapper[4721]: I0202 13:26:55.544930 4721 generic.go:334] "Generic (PLEG): container finished" podID="20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e" containerID="39bda0040f1cfe72cdc0ea0bc846c40e1727cedfc5ccf95841b81f4bbd9ed35a" exitCode=137
Feb 02 13:26:55 crc kubenswrapper[4721]: I0202 13:26:55.545078 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e","Type":"ContainerDied","Data":"39bda0040f1cfe72cdc0ea0bc846c40e1727cedfc5ccf95841b81f4bbd9ed35a"}
Feb 02 13:26:55 crc kubenswrapper[4721]: I0202 13:26:55.545144 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e","Type":"ContainerDied","Data":"b40b44f42b58694b1e7abb93930ae837bfaf27b3b7a9cd3931ed69ef1a81d994"}
Feb 02 13:26:55 crc kubenswrapper[4721]: I0202 13:26:55.545167 4721 scope.go:117] "RemoveContainer" containerID="39bda0040f1cfe72cdc0ea0bc846c40e1727cedfc5ccf95841b81f4bbd9ed35a"
Feb 02 13:26:55 crc kubenswrapper[4721]: I0202 13:26:55.545037 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:26:55 crc kubenswrapper[4721]: I0202 13:26:55.557195 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e-config-data" (OuterVolumeSpecName: "config-data") pod "20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e" (UID: "20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:26:55 crc kubenswrapper[4721]: I0202 13:26:55.613841 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 13:26:55 crc kubenswrapper[4721]: I0202 13:26:55.613877 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 13:26:55 crc kubenswrapper[4721]: I0202 13:26:55.613891 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nh8l7\" (UniqueName: \"kubernetes.io/projected/20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e-kube-api-access-nh8l7\") on node \"crc\" DevicePath \"\""
Feb 02 13:26:55 crc kubenswrapper[4721]: I0202 13:26:55.663244 4721 scope.go:117] "RemoveContainer" containerID="39bda0040f1cfe72cdc0ea0bc846c40e1727cedfc5ccf95841b81f4bbd9ed35a"
Feb 02 13:26:55 crc kubenswrapper[4721]: E0202 13:26:55.663702 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39bda0040f1cfe72cdc0ea0bc846c40e1727cedfc5ccf95841b81f4bbd9ed35a\": container with ID starting with 39bda0040f1cfe72cdc0ea0bc846c40e1727cedfc5ccf95841b81f4bbd9ed35a not found: ID does not exist" containerID="39bda0040f1cfe72cdc0ea0bc846c40e1727cedfc5ccf95841b81f4bbd9ed35a"
Feb 02 13:26:55 crc kubenswrapper[4721]: I0202 13:26:55.663756 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39bda0040f1cfe72cdc0ea0bc846c40e1727cedfc5ccf95841b81f4bbd9ed35a"} err="failed to get container status \"39bda0040f1cfe72cdc0ea0bc846c40e1727cedfc5ccf95841b81f4bbd9ed35a\": rpc error: code = NotFound desc = could not find container \"39bda0040f1cfe72cdc0ea0bc846c40e1727cedfc5ccf95841b81f4bbd9ed35a\": container with ID starting with 39bda0040f1cfe72cdc0ea0bc846c40e1727cedfc5ccf95841b81f4bbd9ed35a not found: ID does not exist"
Feb 02 13:26:55 crc kubenswrapper[4721]: W0202 13:26:55.742993 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaff077fb_9974_49d0_a292_6ec2e865fb66.slice/crio-0f5022370d32f82b4c12f27c9216defd2b800b46bb0788cdc6b534feca22926c WatchSource:0}: Error finding container 0f5022370d32f82b4c12f27c9216defd2b800b46bb0788cdc6b534feca22926c: Status 404 returned error can't find the container with id 0f5022370d32f82b4c12f27c9216defd2b800b46bb0788cdc6b534feca22926c
Feb 02 13:26:55 crc kubenswrapper[4721]: I0202 13:26:55.749424 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 13:26:55 crc kubenswrapper[4721]: I0202 13:26:55.886474 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 02 13:26:55 crc kubenswrapper[4721]: I0202 13:26:55.902485 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 02 13:26:55 crc kubenswrapper[4721]: I0202 13:26:55.914194 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 02 13:26:55 crc kubenswrapper[4721]: E0202 13:26:55.915136 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e" containerName="nova-cell1-novncproxy-novncproxy"
Feb 02 13:26:55 crc kubenswrapper[4721]: I0202 13:26:55.915171 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e" containerName="nova-cell1-novncproxy-novncproxy"
Feb 02 13:26:55 crc kubenswrapper[4721]: I0202 13:26:55.915469 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e" containerName="nova-cell1-novncproxy-novncproxy"
Feb 02 13:26:55 crc kubenswrapper[4721]: I0202 13:26:55.917096 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:26:55 crc kubenswrapper[4721]: I0202 13:26:55.919406 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 02 13:26:55 crc kubenswrapper[4721]: I0202 13:26:55.920008 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Feb 02 13:26:55 crc kubenswrapper[4721]: I0202 13:26:55.921134 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Feb 02 13:26:55 crc kubenswrapper[4721]: I0202 13:26:55.926282 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 02 13:26:56 crc kubenswrapper[4721]: I0202 13:26:56.021930 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8f3d4c-b262-4b71-a934-f584c1f07790-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ab8f3d4c-b262-4b71-a934-f584c1f07790\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:26:56 crc kubenswrapper[4721]: I0202 13:26:56.022014 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8f3d4c-b262-4b71-a934-f584c1f07790-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ab8f3d4c-b262-4b71-a934-f584c1f07790\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:26:56 crc kubenswrapper[4721]: I0202 13:26:56.022112 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8f3d4c-b262-4b71-a934-f584c1f07790-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ab8f3d4c-b262-4b71-a934-f584c1f07790\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:26:56 crc kubenswrapper[4721]: I0202 13:26:56.022194 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8f3d4c-b262-4b71-a934-f584c1f07790-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ab8f3d4c-b262-4b71-a934-f584c1f07790\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:26:56 crc kubenswrapper[4721]: I0202 13:26:56.022252 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2cwt\" (UniqueName: \"kubernetes.io/projected/ab8f3d4c-b262-4b71-a934-f584c1f07790-kube-api-access-s2cwt\") pod \"nova-cell1-novncproxy-0\" (UID: \"ab8f3d4c-b262-4b71-a934-f584c1f07790\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:26:56 crc kubenswrapper[4721]: I0202 13:26:56.124279 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8f3d4c-b262-4b71-a934-f584c1f07790-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ab8f3d4c-b262-4b71-a934-f584c1f07790\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:26:56 crc kubenswrapper[4721]: I0202 13:26:56.124373 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8f3d4c-b262-4b71-a934-f584c1f07790-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ab8f3d4c-b262-4b71-a934-f584c1f07790\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:26:56 crc kubenswrapper[4721]: I0202 13:26:56.124466 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8f3d4c-b262-4b71-a934-f584c1f07790-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ab8f3d4c-b262-4b71-a934-f584c1f07790\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:26:56 crc kubenswrapper[4721]: I0202 13:26:56.124528 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2cwt\" (UniqueName: \"kubernetes.io/projected/ab8f3d4c-b262-4b71-a934-f584c1f07790-kube-api-access-s2cwt\") pod \"nova-cell1-novncproxy-0\" (UID: \"ab8f3d4c-b262-4b71-a934-f584c1f07790\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:26:56 crc kubenswrapper[4721]: I0202 13:26:56.124583 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8f3d4c-b262-4b71-a934-f584c1f07790-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ab8f3d4c-b262-4b71-a934-f584c1f07790\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:26:56 crc kubenswrapper[4721]: I0202 13:26:56.133032 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8f3d4c-b262-4b71-a934-f584c1f07790-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ab8f3d4c-b262-4b71-a934-f584c1f07790\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:26:56 crc kubenswrapper[4721]: I0202 13:26:56.133059 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8f3d4c-b262-4b71-a934-f584c1f07790-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ab8f3d4c-b262-4b71-a934-f584c1f07790\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:26:56 crc kubenswrapper[4721]: I0202 13:26:56.133126 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8f3d4c-b262-4b71-a934-f584c1f07790-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ab8f3d4c-b262-4b71-a934-f584c1f07790\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:26:56 crc kubenswrapper[4721]: I0202 13:26:56.133548 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8f3d4c-b262-4b71-a934-f584c1f07790-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ab8f3d4c-b262-4b71-a934-f584c1f07790\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:26:56 crc kubenswrapper[4721]: I0202 13:26:56.142657 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2cwt\" (UniqueName: \"kubernetes.io/projected/ab8f3d4c-b262-4b71-a934-f584c1f07790-kube-api-access-s2cwt\") pod \"nova-cell1-novncproxy-0\" (UID: \"ab8f3d4c-b262-4b71-a934-f584c1f07790\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:26:56 crc kubenswrapper[4721]: I0202 13:26:56.241251 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 13:26:56 crc kubenswrapper[4721]: I0202 13:26:56.428321 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e" path="/var/lib/kubelet/pods/20108eee-c30d-4d6b-b3d1-bbebfe4e9a0e/volumes"
Feb 02 13:26:56 crc kubenswrapper[4721]: I0202 13:26:56.430715 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd964db7-c2d3-477b-be71-60058c811541" path="/var/lib/kubelet/pods/dd964db7-c2d3-477b-be71-60058c811541/volumes"
Feb 02 13:26:56 crc kubenswrapper[4721]: I0202 13:26:56.561715 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aff077fb-9974-49d0-a292-6ec2e865fb66","Type":"ContainerStarted","Data":"0f5022370d32f82b4c12f27c9216defd2b800b46bb0788cdc6b534feca22926c"}
Feb 02 13:26:56 crc kubenswrapper[4721]: I0202 13:26:56.766002 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 02 13:26:56 crc kubenswrapper[4721]: W0202 13:26:56.768814 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab8f3d4c_b262_4b71_a934_f584c1f07790.slice/crio-cdfe31faf5b0bb74e1240fa508841b91b1fcd38c1013c3456b6f88b75cf2e3c3 WatchSource:0}: Error finding container cdfe31faf5b0bb74e1240fa508841b91b1fcd38c1013c3456b6f88b75cf2e3c3: Status 404 returned error can't find the container with id cdfe31faf5b0bb74e1240fa508841b91b1fcd38c1013c3456b6f88b75cf2e3c3
Feb 02 13:26:57 crc kubenswrapper[4721]: I0202 13:26:57.576029 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aff077fb-9974-49d0-a292-6ec2e865fb66","Type":"ContainerStarted","Data":"21ff204c5ac14a91dcf68f01d3b2281bff194d3d4953fe8e4f71719ba097f0db"}
Feb 02 13:26:57 crc kubenswrapper[4721]: I0202 13:26:57.576400 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aff077fb-9974-49d0-a292-6ec2e865fb66","Type":"ContainerStarted","Data":"7c6cd5845fd6326a60d04161ca6495067b38e25fced7ab0983a3a262f70554d8"}
Feb 02 13:26:57 crc kubenswrapper[4721]: I0202 13:26:57.578755 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ab8f3d4c-b262-4b71-a934-f584c1f07790","Type":"ContainerStarted","Data":"f898c6d26001e92c4ce65251bab93bd8ae9829ada56e10c907665d13d9cac82c"}
Feb 02 13:26:57 crc kubenswrapper[4721]: I0202 13:26:57.578796 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ab8f3d4c-b262-4b71-a934-f584c1f07790","Type":"ContainerStarted","Data":"cdfe31faf5b0bb74e1240fa508841b91b1fcd38c1013c3456b6f88b75cf2e3c3"}
Feb 02 13:26:57 crc kubenswrapper[4721]: I0202 13:26:57.603731 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0"
podStartSLOduration=2.603710141 podStartE2EDuration="2.603710141s" podCreationTimestamp="2026-02-02 13:26:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:26:57.595513169 +0000 UTC m=+1557.898027558" watchObservedRunningTime="2026-02-02 13:26:57.603710141 +0000 UTC m=+1557.906224530" Feb 02 13:26:57 crc kubenswrapper[4721]: I0202 13:26:57.734745 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 02 13:26:58 crc kubenswrapper[4721]: I0202 13:26:58.637010 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aff077fb-9974-49d0-a292-6ec2e865fb66","Type":"ContainerStarted","Data":"948f6018527aef7a4550fc8c54317e9398a8dbb4dac55b06acf1eae76dacca28"} Feb 02 13:26:58 crc kubenswrapper[4721]: I0202 13:26:58.642589 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 13:26:58 crc kubenswrapper[4721]: I0202 13:26:58.642955 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 13:26:58 crc kubenswrapper[4721]: I0202 13:26:58.646371 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 13:26:58 crc kubenswrapper[4721]: I0202 13:26:58.647600 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 13:26:59 crc kubenswrapper[4721]: I0202 13:26:59.647856 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 13:26:59 crc kubenswrapper[4721]: I0202 13:26:59.651241 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 13:26:59 crc kubenswrapper[4721]: I0202 13:26:59.845855 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-fz8n5"] Feb 02 13:26:59 crc kubenswrapper[4721]: I0202 13:26:59.848489 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-fz8n5" Feb 02 13:26:59 crc kubenswrapper[4721]: I0202 13:26:59.854721 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-fz8n5"] Feb 02 13:26:59 crc kubenswrapper[4721]: I0202 13:26:59.917990 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-fz8n5\" (UID: \"1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fz8n5" Feb 02 13:26:59 crc kubenswrapper[4721]: I0202 13:26:59.918131 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-fz8n5\" (UID: \"1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fz8n5" Feb 02 13:26:59 crc kubenswrapper[4721]: I0202 13:26:59.918166 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b-config\") pod \"dnsmasq-dns-6b7bbf7cf9-fz8n5\" (UID: \"1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fz8n5" Feb 02 13:26:59 crc kubenswrapper[4721]: I0202 13:26:59.918193 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-fz8n5\" (UID: \"1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fz8n5" Feb 02 13:26:59 crc kubenswrapper[4721]: I0202 13:26:59.918220 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65lcl\" (UniqueName: \"kubernetes.io/projected/1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b-kube-api-access-65lcl\") pod \"dnsmasq-dns-6b7bbf7cf9-fz8n5\" (UID: \"1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fz8n5" Feb 02 13:26:59 crc kubenswrapper[4721]: I0202 13:26:59.918314 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-fz8n5\" (UID: \"1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fz8n5" Feb 02 13:27:00 crc kubenswrapper[4721]: I0202 13:27:00.020491 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-fz8n5\" (UID: \"1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fz8n5" Feb 02 13:27:00 crc kubenswrapper[4721]: I0202 13:27:00.020570 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-fz8n5\" (UID: \"1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fz8n5" Feb 02 13:27:00 crc kubenswrapper[4721]: I0202 13:27:00.020668 4721 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-fz8n5\" (UID: \"1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fz8n5" Feb 02 13:27:00 crc kubenswrapper[4721]: I0202 13:27:00.020703 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b-config\") pod \"dnsmasq-dns-6b7bbf7cf9-fz8n5\" (UID: \"1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fz8n5" Feb 02 13:27:00 crc kubenswrapper[4721]: I0202 13:27:00.020729 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-fz8n5\" (UID: \"1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fz8n5" Feb 02 13:27:00 crc kubenswrapper[4721]: I0202 13:27:00.020762 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65lcl\" (UniqueName: \"kubernetes.io/projected/1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b-kube-api-access-65lcl\") pod \"dnsmasq-dns-6b7bbf7cf9-fz8n5\" (UID: \"1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fz8n5" Feb 02 13:27:00 crc kubenswrapper[4721]: I0202 13:27:00.021594 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-fz8n5\" (UID: \"1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fz8n5" Feb 02 13:27:00 crc kubenswrapper[4721]: I0202 13:27:00.021649 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-fz8n5\" (UID: \"1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fz8n5" Feb 02 13:27:00 crc kubenswrapper[4721]: I0202 13:27:00.021728 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-fz8n5\" (UID: \"1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fz8n5" Feb 02 13:27:00 crc kubenswrapper[4721]: I0202 13:27:00.021728 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-fz8n5\" (UID: \"1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fz8n5" Feb 02 13:27:00 crc kubenswrapper[4721]: I0202 13:27:00.022412 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b-config\") pod \"dnsmasq-dns-6b7bbf7cf9-fz8n5\" (UID: \"1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fz8n5" Feb 02 13:27:00 crc kubenswrapper[4721]: I0202 13:27:00.039797 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65lcl\" (UniqueName: 
\"kubernetes.io/projected/1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b-kube-api-access-65lcl\") pod \"dnsmasq-dns-6b7bbf7cf9-fz8n5\" (UID: \"1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fz8n5" Feb 02 13:27:00 crc kubenswrapper[4721]: I0202 13:27:00.184405 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-fz8n5" Feb 02 13:27:00 crc kubenswrapper[4721]: I0202 13:27:00.804264 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-fz8n5"] Feb 02 13:27:01 crc kubenswrapper[4721]: I0202 13:27:01.242332 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:27:01 crc kubenswrapper[4721]: I0202 13:27:01.700035 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aff077fb-9974-49d0-a292-6ec2e865fb66","Type":"ContainerStarted","Data":"841b1d028b76ecebf6456a6190bf54216a9623cac4339240f3c8ea1ce8bad857"} Feb 02 13:27:01 crc kubenswrapper[4721]: I0202 13:27:01.700500 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 13:27:01 crc kubenswrapper[4721]: I0202 13:27:01.702870 4721 generic.go:334] "Generic (PLEG): container finished" podID="1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b" containerID="c693a0278e26b7b8778cdf577503d9b8dcaf5fec288f6574f9f36e14a5f3bf1a" exitCode=0 Feb 02 13:27:01 crc kubenswrapper[4721]: I0202 13:27:01.703710 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-fz8n5" event={"ID":"1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b","Type":"ContainerDied","Data":"c693a0278e26b7b8778cdf577503d9b8dcaf5fec288f6574f9f36e14a5f3bf1a"} Feb 02 13:27:01 crc kubenswrapper[4721]: I0202 13:27:01.703737 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-fz8n5" event={"ID":"1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b","Type":"ContainerStarted","Data":"46b041a1c8868b769f0e6d94df9476e5e601bc8f9599ee2e011a89fba3ef680f"} Feb 02 13:27:01 crc kubenswrapper[4721]: I0202 13:27:01.742296 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.9654303 podStartE2EDuration="7.742273723s" podCreationTimestamp="2026-02-02 13:26:54 +0000 UTC" firstStartedPulling="2026-02-02 13:26:55.745443493 +0000 UTC m=+1556.047957902" lastFinishedPulling="2026-02-02 13:27:00.522286926 +0000 UTC m=+1560.824801325" observedRunningTime="2026-02-02 13:27:01.736738703 +0000 UTC m=+1562.039253102" watchObservedRunningTime="2026-02-02 13:27:01.742273723 +0000 UTC m=+1562.044788102" Feb 02 13:27:02 crc kubenswrapper[4721]: I0202 13:27:02.517434 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:27:02 crc kubenswrapper[4721]: I0202 13:27:02.717095 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-fz8n5" event={"ID":"1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b","Type":"ContainerStarted","Data":"dbc6deced1d62c44c9d8f674e3578530eef49c5eab475dff801a9bd1c8e471f4"} Feb 02 13:27:02 crc kubenswrapper[4721]: I0202 13:27:02.718273 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="43a836c9-b9f7-4991-9cb4-db6dce6f8e08" containerName="nova-api-log" containerID="cri-o://59c81dc2121b1526d3fc9cd11a40475f9cc813f682849fa4432ebb8a77fa2a04" gracePeriod=30 Feb 02 13:27:02 crc kubenswrapper[4721]: I0202 
13:27:02.718426 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="43a836c9-b9f7-4991-9cb4-db6dce6f8e08" containerName="nova-api-api" containerID="cri-o://cb14a013f6e5ea8bc11bf1264095a5d3c6d4ad516551071c013c66ca9a01bd5e" gracePeriod=30 Feb 02 13:27:02 crc kubenswrapper[4721]: I0202 13:27:02.719365 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-fz8n5" Feb 02 13:27:02 crc kubenswrapper[4721]: I0202 13:27:02.750119 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-fz8n5" podStartSLOduration=3.750098437 podStartE2EDuration="3.750098437s" podCreationTimestamp="2026-02-02 13:26:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:27:02.746671283 +0000 UTC m=+1563.049185682" watchObservedRunningTime="2026-02-02 13:27:02.750098437 +0000 UTC m=+1563.052612826" Feb 02 13:27:03 crc kubenswrapper[4721]: I0202 13:27:03.754652 4721 generic.go:334] "Generic (PLEG): container finished" podID="43a836c9-b9f7-4991-9cb4-db6dce6f8e08" containerID="59c81dc2121b1526d3fc9cd11a40475f9cc813f682849fa4432ebb8a77fa2a04" exitCode=143 Feb 02 13:27:03 crc kubenswrapper[4721]: I0202 13:27:03.754697 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"43a836c9-b9f7-4991-9cb4-db6dce6f8e08","Type":"ContainerDied","Data":"59c81dc2121b1526d3fc9cd11a40475f9cc813f682849fa4432ebb8a77fa2a04"} Feb 02 13:27:03 crc kubenswrapper[4721]: I0202 13:27:03.885222 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:27:03 crc kubenswrapper[4721]: I0202 13:27:03.886679 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aff077fb-9974-49d0-a292-6ec2e865fb66" containerName="ceilometer-central-agent" containerID="cri-o://7c6cd5845fd6326a60d04161ca6495067b38e25fced7ab0983a3a262f70554d8" gracePeriod=30 Feb 02 13:27:03 crc kubenswrapper[4721]: I0202 13:27:03.886907 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aff077fb-9974-49d0-a292-6ec2e865fb66" containerName="proxy-httpd" containerID="cri-o://841b1d028b76ecebf6456a6190bf54216a9623cac4339240f3c8ea1ce8bad857" gracePeriod=30 Feb 02 13:27:03 crc kubenswrapper[4721]: I0202 13:27:03.887101 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aff077fb-9974-49d0-a292-6ec2e865fb66" containerName="sg-core" containerID="cri-o://948f6018527aef7a4550fc8c54317e9398a8dbb4dac55b06acf1eae76dacca28" gracePeriod=30 Feb 02 13:27:03 crc kubenswrapper[4721]: I0202 13:27:03.887132 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aff077fb-9974-49d0-a292-6ec2e865fb66" containerName="ceilometer-notification-agent" containerID="cri-o://21ff204c5ac14a91dcf68f01d3b2281bff194d3d4953fe8e4f71719ba097f0db" gracePeriod=30 Feb 02 13:27:04 crc kubenswrapper[4721]: I0202 13:27:04.768953 4721 generic.go:334] "Generic (PLEG): container finished" podID="aff077fb-9974-49d0-a292-6ec2e865fb66" containerID="841b1d028b76ecebf6456a6190bf54216a9623cac4339240f3c8ea1ce8bad857" exitCode=0 Feb 02 13:27:04 crc kubenswrapper[4721]: I0202 13:27:04.769304 4721 generic.go:334] "Generic (PLEG): container finished" 
podID="aff077fb-9974-49d0-a292-6ec2e865fb66" containerID="948f6018527aef7a4550fc8c54317e9398a8dbb4dac55b06acf1eae76dacca28" exitCode=2 Feb 02 13:27:04 crc kubenswrapper[4721]: I0202 13:27:04.769318 4721 generic.go:334] "Generic (PLEG): container finished" podID="aff077fb-9974-49d0-a292-6ec2e865fb66" containerID="21ff204c5ac14a91dcf68f01d3b2281bff194d3d4953fe8e4f71719ba097f0db" exitCode=0 Feb 02 13:27:04 crc kubenswrapper[4721]: I0202 13:27:04.770544 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aff077fb-9974-49d0-a292-6ec2e865fb66","Type":"ContainerDied","Data":"841b1d028b76ecebf6456a6190bf54216a9623cac4339240f3c8ea1ce8bad857"} Feb 02 13:27:04 crc kubenswrapper[4721]: I0202 13:27:04.770579 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aff077fb-9974-49d0-a292-6ec2e865fb66","Type":"ContainerDied","Data":"948f6018527aef7a4550fc8c54317e9398a8dbb4dac55b06acf1eae76dacca28"} Feb 02 13:27:04 crc kubenswrapper[4721]: I0202 13:27:04.770594 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aff077fb-9974-49d0-a292-6ec2e865fb66","Type":"ContainerDied","Data":"21ff204c5ac14a91dcf68f01d3b2281bff194d3d4953fe8e4f71719ba097f0db"} Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.518146 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.643671 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-scripts\") pod \"aff077fb-9974-49d0-a292-6ec2e865fb66\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.643761 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-sg-core-conf-yaml\") pod \"aff077fb-9974-49d0-a292-6ec2e865fb66\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.643881 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-ceilometer-tls-certs\") pod \"aff077fb-9974-49d0-a292-6ec2e865fb66\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.643936 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aff077fb-9974-49d0-a292-6ec2e865fb66-log-httpd\") pod \"aff077fb-9974-49d0-a292-6ec2e865fb66\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.643991 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aff077fb-9974-49d0-a292-6ec2e865fb66-run-httpd\") pod \"aff077fb-9974-49d0-a292-6ec2e865fb66\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.644036 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-config-data\") pod \"aff077fb-9974-49d0-a292-6ec2e865fb66\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " Feb 02 
13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.644631 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-combined-ca-bundle\") pod \"aff077fb-9974-49d0-a292-6ec2e865fb66\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.644544 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aff077fb-9974-49d0-a292-6ec2e865fb66-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "aff077fb-9974-49d0-a292-6ec2e865fb66" (UID: "aff077fb-9974-49d0-a292-6ec2e865fb66"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.644968 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s9dq\" (UniqueName: \"kubernetes.io/projected/aff077fb-9974-49d0-a292-6ec2e865fb66-kube-api-access-4s9dq\") pod \"aff077fb-9974-49d0-a292-6ec2e865fb66\" (UID: \"aff077fb-9974-49d0-a292-6ec2e865fb66\") " Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.644560 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aff077fb-9974-49d0-a292-6ec2e865fb66-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "aff077fb-9974-49d0-a292-6ec2e865fb66" (UID: "aff077fb-9974-49d0-a292-6ec2e865fb66"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.646002 4721 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aff077fb-9974-49d0-a292-6ec2e865fb66-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.646024 4721 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aff077fb-9974-49d0-a292-6ec2e865fb66-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.649767 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aff077fb-9974-49d0-a292-6ec2e865fb66-kube-api-access-4s9dq" (OuterVolumeSpecName: "kube-api-access-4s9dq") pod "aff077fb-9974-49d0-a292-6ec2e865fb66" (UID: "aff077fb-9974-49d0-a292-6ec2e865fb66"). InnerVolumeSpecName "kube-api-access-4s9dq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.649954 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-scripts" (OuterVolumeSpecName: "scripts") pod "aff077fb-9974-49d0-a292-6ec2e865fb66" (UID: "aff077fb-9974-49d0-a292-6ec2e865fb66"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.680615 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "aff077fb-9974-49d0-a292-6ec2e865fb66" (UID: "aff077fb-9974-49d0-a292-6ec2e865fb66"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.714335 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "aff077fb-9974-49d0-a292-6ec2e865fb66" (UID: "aff077fb-9974-49d0-a292-6ec2e865fb66"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.748733 4721 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.748774 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s9dq\" (UniqueName: \"kubernetes.io/projected/aff077fb-9974-49d0-a292-6ec2e865fb66-kube-api-access-4s9dq\") on node \"crc\" DevicePath \"\"" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.748787 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.748798 4721 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.752299 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aff077fb-9974-49d0-a292-6ec2e865fb66" (UID: "aff077fb-9974-49d0-a292-6ec2e865fb66"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.768699 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-config-data" (OuterVolumeSpecName: "config-data") pod "aff077fb-9974-49d0-a292-6ec2e865fb66" (UID: "aff077fb-9974-49d0-a292-6ec2e865fb66"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.782607 4721 generic.go:334] "Generic (PLEG): container finished" podID="aff077fb-9974-49d0-a292-6ec2e865fb66" containerID="7c6cd5845fd6326a60d04161ca6495067b38e25fced7ab0983a3a262f70554d8" exitCode=0 Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.782658 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aff077fb-9974-49d0-a292-6ec2e865fb66","Type":"ContainerDied","Data":"7c6cd5845fd6326a60d04161ca6495067b38e25fced7ab0983a3a262f70554d8"} Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.782668 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.782697 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aff077fb-9974-49d0-a292-6ec2e865fb66","Type":"ContainerDied","Data":"0f5022370d32f82b4c12f27c9216defd2b800b46bb0788cdc6b534feca22926c"} Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.782718 4721 scope.go:117] "RemoveContainer" containerID="841b1d028b76ecebf6456a6190bf54216a9623cac4339240f3c8ea1ce8bad857" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.824703 4721 scope.go:117] "RemoveContainer" containerID="948f6018527aef7a4550fc8c54317e9398a8dbb4dac55b06acf1eae76dacca28" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.851818 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.851857 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff077fb-9974-49d0-a292-6ec2e865fb66-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.851886 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.855796 4721 scope.go:117] "RemoveContainer" containerID="21ff204c5ac14a91dcf68f01d3b2281bff194d3d4953fe8e4f71719ba097f0db" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.871101 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.884027 4721 scope.go:117] "RemoveContainer" containerID="7c6cd5845fd6326a60d04161ca6495067b38e25fced7ab0983a3a262f70554d8" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.884613 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:27:05 crc kubenswrapper[4721]: E0202 13:27:05.885361 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aff077fb-9974-49d0-a292-6ec2e865fb66" containerName="ceilometer-notification-agent" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.885385 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="aff077fb-9974-49d0-a292-6ec2e865fb66" containerName="ceilometer-notification-agent" Feb 02 13:27:05 crc kubenswrapper[4721]: E0202 13:27:05.885416 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aff077fb-9974-49d0-a292-6ec2e865fb66" containerName="ceilometer-central-agent" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.885427 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="aff077fb-9974-49d0-a292-6ec2e865fb66" containerName="ceilometer-central-agent" Feb 02 13:27:05 crc kubenswrapper[4721]: E0202 13:27:05.885451 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aff077fb-9974-49d0-a292-6ec2e865fb66" containerName="sg-core" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.885461 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="aff077fb-9974-49d0-a292-6ec2e865fb66" containerName="sg-core" Feb 02 13:27:05 crc kubenswrapper[4721]: E0202 13:27:05.885483 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aff077fb-9974-49d0-a292-6ec2e865fb66" containerName="proxy-httpd" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.885491 4721 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="aff077fb-9974-49d0-a292-6ec2e865fb66" containerName="proxy-httpd" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.885701 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="aff077fb-9974-49d0-a292-6ec2e865fb66" containerName="ceilometer-notification-agent" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.885733 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="aff077fb-9974-49d0-a292-6ec2e865fb66" containerName="sg-core" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.885746 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="aff077fb-9974-49d0-a292-6ec2e865fb66" containerName="ceilometer-central-agent" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.885965 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="aff077fb-9974-49d0-a292-6ec2e865fb66" containerName="proxy-httpd" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.888244 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.890196 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.890472 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.890590 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.900600 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.915630 4721 scope.go:117] "RemoveContainer" containerID="841b1d028b76ecebf6456a6190bf54216a9623cac4339240f3c8ea1ce8bad857" Feb 02 13:27:05 crc kubenswrapper[4721]: E0202 13:27:05.916056 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"841b1d028b76ecebf6456a6190bf54216a9623cac4339240f3c8ea1ce8bad857\": container with ID starting with 841b1d028b76ecebf6456a6190bf54216a9623cac4339240f3c8ea1ce8bad857 not found: ID does not exist" containerID="841b1d028b76ecebf6456a6190bf54216a9623cac4339240f3c8ea1ce8bad857" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.916100 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"841b1d028b76ecebf6456a6190bf54216a9623cac4339240f3c8ea1ce8bad857"} err="failed to get container status \"841b1d028b76ecebf6456a6190bf54216a9623cac4339240f3c8ea1ce8bad857\": rpc error: code = NotFound desc = could not find container \"841b1d028b76ecebf6456a6190bf54216a9623cac4339240f3c8ea1ce8bad857\": container with ID starting with 841b1d028b76ecebf6456a6190bf54216a9623cac4339240f3c8ea1ce8bad857 not found: ID does not exist" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.916120 4721 scope.go:117] "RemoveContainer" containerID="948f6018527aef7a4550fc8c54317e9398a8dbb4dac55b06acf1eae76dacca28" Feb 02 13:27:05 crc kubenswrapper[4721]: E0202 13:27:05.916432 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"948f6018527aef7a4550fc8c54317e9398a8dbb4dac55b06acf1eae76dacca28\": container with ID starting with 948f6018527aef7a4550fc8c54317e9398a8dbb4dac55b06acf1eae76dacca28 not found: ID does not exist" 
containerID="948f6018527aef7a4550fc8c54317e9398a8dbb4dac55b06acf1eae76dacca28" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.916564 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"948f6018527aef7a4550fc8c54317e9398a8dbb4dac55b06acf1eae76dacca28"} err="failed to get container status \"948f6018527aef7a4550fc8c54317e9398a8dbb4dac55b06acf1eae76dacca28\": rpc error: code = NotFound desc = could not find container \"948f6018527aef7a4550fc8c54317e9398a8dbb4dac55b06acf1eae76dacca28\": container with ID starting with 948f6018527aef7a4550fc8c54317e9398a8dbb4dac55b06acf1eae76dacca28 not found: ID does not exist" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.916579 4721 scope.go:117] "RemoveContainer" containerID="21ff204c5ac14a91dcf68f01d3b2281bff194d3d4953fe8e4f71719ba097f0db" Feb 02 13:27:05 crc kubenswrapper[4721]: E0202 13:27:05.916898 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21ff204c5ac14a91dcf68f01d3b2281bff194d3d4953fe8e4f71719ba097f0db\": container with ID starting with 21ff204c5ac14a91dcf68f01d3b2281bff194d3d4953fe8e4f71719ba097f0db not found: ID does not exist" containerID="21ff204c5ac14a91dcf68f01d3b2281bff194d3d4953fe8e4f71719ba097f0db" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.916919 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21ff204c5ac14a91dcf68f01d3b2281bff194d3d4953fe8e4f71719ba097f0db"} err="failed to get container status \"21ff204c5ac14a91dcf68f01d3b2281bff194d3d4953fe8e4f71719ba097f0db\": rpc error: code = NotFound desc = could not find container \"21ff204c5ac14a91dcf68f01d3b2281bff194d3d4953fe8e4f71719ba097f0db\": container with ID starting with 21ff204c5ac14a91dcf68f01d3b2281bff194d3d4953fe8e4f71719ba097f0db not found: ID does not exist" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.916931 4721 scope.go:117] "RemoveContainer" containerID="7c6cd5845fd6326a60d04161ca6495067b38e25fced7ab0983a3a262f70554d8" Feb 02 13:27:05 crc kubenswrapper[4721]: E0202 13:27:05.917237 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c6cd5845fd6326a60d04161ca6495067b38e25fced7ab0983a3a262f70554d8\": container with ID starting with 7c6cd5845fd6326a60d04161ca6495067b38e25fced7ab0983a3a262f70554d8 not found: ID does not exist" containerID="7c6cd5845fd6326a60d04161ca6495067b38e25fced7ab0983a3a262f70554d8" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.917259 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c6cd5845fd6326a60d04161ca6495067b38e25fced7ab0983a3a262f70554d8"} err="failed to get container status \"7c6cd5845fd6326a60d04161ca6495067b38e25fced7ab0983a3a262f70554d8\": rpc error: code = NotFound desc = could not find container \"7c6cd5845fd6326a60d04161ca6495067b38e25fced7ab0983a3a262f70554d8\": container with ID starting with 7c6cd5845fd6326a60d04161ca6495067b38e25fced7ab0983a3a262f70554d8 not found: ID does not exist" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.953425 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c678f02b-cbee-4578-9e28-067b63af2682-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c678f02b-cbee-4578-9e28-067b63af2682\") " pod="openstack/ceilometer-0" Feb 02 13:27:05 crc 
kubenswrapper[4721]: I0202 13:27:05.953517 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c678f02b-cbee-4578-9e28-067b63af2682-log-httpd\") pod \"ceilometer-0\" (UID: \"c678f02b-cbee-4578-9e28-067b63af2682\") " pod="openstack/ceilometer-0" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.953581 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxsfc\" (UniqueName: \"kubernetes.io/projected/c678f02b-cbee-4578-9e28-067b63af2682-kube-api-access-fxsfc\") pod \"ceilometer-0\" (UID: \"c678f02b-cbee-4578-9e28-067b63af2682\") " pod="openstack/ceilometer-0" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.953667 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c678f02b-cbee-4578-9e28-067b63af2682-scripts\") pod \"ceilometer-0\" (UID: \"c678f02b-cbee-4578-9e28-067b63af2682\") " pod="openstack/ceilometer-0" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.953711 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c678f02b-cbee-4578-9e28-067b63af2682-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c678f02b-cbee-4578-9e28-067b63af2682\") " pod="openstack/ceilometer-0" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.953746 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c678f02b-cbee-4578-9e28-067b63af2682-run-httpd\") pod \"ceilometer-0\" (UID: \"c678f02b-cbee-4578-9e28-067b63af2682\") " pod="openstack/ceilometer-0" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.953778 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c678f02b-cbee-4578-9e28-067b63af2682-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c678f02b-cbee-4578-9e28-067b63af2682\") " pod="openstack/ceilometer-0" Feb 02 13:27:05 crc kubenswrapper[4721]: I0202 13:27:05.953916 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c678f02b-cbee-4578-9e28-067b63af2682-config-data\") pod \"ceilometer-0\" (UID: \"c678f02b-cbee-4578-9e28-067b63af2682\") " pod="openstack/ceilometer-0" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.055973 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c678f02b-cbee-4578-9e28-067b63af2682-scripts\") pod \"ceilometer-0\" (UID: \"c678f02b-cbee-4578-9e28-067b63af2682\") " pod="openstack/ceilometer-0" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.056041 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c678f02b-cbee-4578-9e28-067b63af2682-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c678f02b-cbee-4578-9e28-067b63af2682\") " pod="openstack/ceilometer-0" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.056088 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c678f02b-cbee-4578-9e28-067b63af2682-run-httpd\") pod 
\"ceilometer-0\" (UID: \"c678f02b-cbee-4578-9e28-067b63af2682\") " pod="openstack/ceilometer-0" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.056110 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c678f02b-cbee-4578-9e28-067b63af2682-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c678f02b-cbee-4578-9e28-067b63af2682\") " pod="openstack/ceilometer-0" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.056140 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c678f02b-cbee-4578-9e28-067b63af2682-config-data\") pod \"ceilometer-0\" (UID: \"c678f02b-cbee-4578-9e28-067b63af2682\") " pod="openstack/ceilometer-0" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.056219 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c678f02b-cbee-4578-9e28-067b63af2682-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c678f02b-cbee-4578-9e28-067b63af2682\") " pod="openstack/ceilometer-0" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.056907 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c678f02b-cbee-4578-9e28-067b63af2682-log-httpd\") pod \"ceilometer-0\" (UID: \"c678f02b-cbee-4578-9e28-067b63af2682\") " pod="openstack/ceilometer-0" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.056976 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxsfc\" (UniqueName: \"kubernetes.io/projected/c678f02b-cbee-4578-9e28-067b63af2682-kube-api-access-fxsfc\") pod \"ceilometer-0\" (UID: \"c678f02b-cbee-4578-9e28-067b63af2682\") " pod="openstack/ceilometer-0" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.057208 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c678f02b-cbee-4578-9e28-067b63af2682-run-httpd\") pod \"ceilometer-0\" (UID: \"c678f02b-cbee-4578-9e28-067b63af2682\") " pod="openstack/ceilometer-0" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.057316 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c678f02b-cbee-4578-9e28-067b63af2682-log-httpd\") pod \"ceilometer-0\" (UID: \"c678f02b-cbee-4578-9e28-067b63af2682\") " pod="openstack/ceilometer-0" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.060461 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c678f02b-cbee-4578-9e28-067b63af2682-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c678f02b-cbee-4578-9e28-067b63af2682\") " pod="openstack/ceilometer-0" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.060546 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c678f02b-cbee-4578-9e28-067b63af2682-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c678f02b-cbee-4578-9e28-067b63af2682\") " pod="openstack/ceilometer-0" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.061735 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c678f02b-cbee-4578-9e28-067b63af2682-scripts\") pod \"ceilometer-0\" (UID: 
\"c678f02b-cbee-4578-9e28-067b63af2682\") " pod="openstack/ceilometer-0" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.062010 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c678f02b-cbee-4578-9e28-067b63af2682-config-data\") pod \"ceilometer-0\" (UID: \"c678f02b-cbee-4578-9e28-067b63af2682\") " pod="openstack/ceilometer-0" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.062880 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c678f02b-cbee-4578-9e28-067b63af2682-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c678f02b-cbee-4578-9e28-067b63af2682\") " pod="openstack/ceilometer-0" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.078912 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxsfc\" (UniqueName: \"kubernetes.io/projected/c678f02b-cbee-4578-9e28-067b63af2682-kube-api-access-fxsfc\") pod \"ceilometer-0\" (UID: \"c678f02b-cbee-4578-9e28-067b63af2682\") " pod="openstack/ceilometer-0" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.205789 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.242011 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.278653 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.422971 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aff077fb-9974-49d0-a292-6ec2e865fb66" path="/var/lib/kubelet/pods/aff077fb-9974-49d0-a292-6ec2e865fb66/volumes" Feb 02 13:27:06 crc kubenswrapper[4721]: W0202 13:27:06.661662 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc678f02b_cbee_4578_9e28_067b63af2682.slice/crio-196c1ad24ffe9fbb8c6037d1f0687a36a8874420cbb3c304dbf4cdf92e555e7b WatchSource:0}: Error finding container 196c1ad24ffe9fbb8c6037d1f0687a36a8874420cbb3c304dbf4cdf92e555e7b: Status 404 returned error can't find the container with id 196c1ad24ffe9fbb8c6037d1f0687a36a8874420cbb3c304dbf4cdf92e555e7b Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.664275 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.773213 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.813218 4721 generic.go:334] "Generic (PLEG): container finished" podID="43a836c9-b9f7-4991-9cb4-db6dce6f8e08" containerID="cb14a013f6e5ea8bc11bf1264095a5d3c6d4ad516551071c013c66ca9a01bd5e" exitCode=0 Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.813283 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"43a836c9-b9f7-4991-9cb4-db6dce6f8e08","Type":"ContainerDied","Data":"cb14a013f6e5ea8bc11bf1264095a5d3c6d4ad516551071c013c66ca9a01bd5e"} Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.813308 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"43a836c9-b9f7-4991-9cb4-db6dce6f8e08","Type":"ContainerDied","Data":"4fca7f0bef96bd658599c1f0a2e8aa492cdcf6a60b42a42e88a85a2eec4a0be9"} Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.813325 4721 scope.go:117] "RemoveContainer" containerID="cb14a013f6e5ea8bc11bf1264095a5d3c6d4ad516551071c013c66ca9a01bd5e" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.813431 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.826591 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c678f02b-cbee-4578-9e28-067b63af2682","Type":"ContainerStarted","Data":"196c1ad24ffe9fbb8c6037d1f0687a36a8874420cbb3c304dbf4cdf92e555e7b"} Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.863005 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.876115 4721 scope.go:117] "RemoveContainer" containerID="59c81dc2121b1526d3fc9cd11a40475f9cc813f682849fa4432ebb8a77fa2a04" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.880080 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43a836c9-b9f7-4991-9cb4-db6dce6f8e08-logs\") pod \"43a836c9-b9f7-4991-9cb4-db6dce6f8e08\" (UID: \"43a836c9-b9f7-4991-9cb4-db6dce6f8e08\") " Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.880367 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25b4g\" (UniqueName: \"kubernetes.io/projected/43a836c9-b9f7-4991-9cb4-db6dce6f8e08-kube-api-access-25b4g\") pod \"43a836c9-b9f7-4991-9cb4-db6dce6f8e08\" (UID: \"43a836c9-b9f7-4991-9cb4-db6dce6f8e08\") " Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.884867 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43a836c9-b9f7-4991-9cb4-db6dce6f8e08-combined-ca-bundle\") pod \"43a836c9-b9f7-4991-9cb4-db6dce6f8e08\" (UID: \"43a836c9-b9f7-4991-9cb4-db6dce6f8e08\") " Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.884926 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43a836c9-b9f7-4991-9cb4-db6dce6f8e08-config-data\") pod \"43a836c9-b9f7-4991-9cb4-db6dce6f8e08\" (UID: \"43a836c9-b9f7-4991-9cb4-db6dce6f8e08\") " Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.887322 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43a836c9-b9f7-4991-9cb4-db6dce6f8e08-logs" (OuterVolumeSpecName: "logs") pod 
"43a836c9-b9f7-4991-9cb4-db6dce6f8e08" (UID: "43a836c9-b9f7-4991-9cb4-db6dce6f8e08"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.902819 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43a836c9-b9f7-4991-9cb4-db6dce6f8e08-kube-api-access-25b4g" (OuterVolumeSpecName: "kube-api-access-25b4g") pod "43a836c9-b9f7-4991-9cb4-db6dce6f8e08" (UID: "43a836c9-b9f7-4991-9cb4-db6dce6f8e08"). InnerVolumeSpecName "kube-api-access-25b4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.919432 4721 scope.go:117] "RemoveContainer" containerID="cb14a013f6e5ea8bc11bf1264095a5d3c6d4ad516551071c013c66ca9a01bd5e" Feb 02 13:27:06 crc kubenswrapper[4721]: E0202 13:27:06.920839 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb14a013f6e5ea8bc11bf1264095a5d3c6d4ad516551071c013c66ca9a01bd5e\": container with ID starting with cb14a013f6e5ea8bc11bf1264095a5d3c6d4ad516551071c013c66ca9a01bd5e not found: ID does not exist" containerID="cb14a013f6e5ea8bc11bf1264095a5d3c6d4ad516551071c013c66ca9a01bd5e" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.920878 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb14a013f6e5ea8bc11bf1264095a5d3c6d4ad516551071c013c66ca9a01bd5e"} err="failed to get container status \"cb14a013f6e5ea8bc11bf1264095a5d3c6d4ad516551071c013c66ca9a01bd5e\": rpc error: code = NotFound desc = could not find container \"cb14a013f6e5ea8bc11bf1264095a5d3c6d4ad516551071c013c66ca9a01bd5e\": container with ID starting with cb14a013f6e5ea8bc11bf1264095a5d3c6d4ad516551071c013c66ca9a01bd5e not found: ID does not exist" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.920907 4721 scope.go:117] "RemoveContainer" containerID="59c81dc2121b1526d3fc9cd11a40475f9cc813f682849fa4432ebb8a77fa2a04" Feb 02 13:27:06 crc kubenswrapper[4721]: E0202 13:27:06.921966 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59c81dc2121b1526d3fc9cd11a40475f9cc813f682849fa4432ebb8a77fa2a04\": container with ID starting with 59c81dc2121b1526d3fc9cd11a40475f9cc813f682849fa4432ebb8a77fa2a04 not found: ID does not exist" containerID="59c81dc2121b1526d3fc9cd11a40475f9cc813f682849fa4432ebb8a77fa2a04" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.922007 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59c81dc2121b1526d3fc9cd11a40475f9cc813f682849fa4432ebb8a77fa2a04"} err="failed to get container status \"59c81dc2121b1526d3fc9cd11a40475f9cc813f682849fa4432ebb8a77fa2a04\": rpc error: code = NotFound desc = could not find container \"59c81dc2121b1526d3fc9cd11a40475f9cc813f682849fa4432ebb8a77fa2a04\": container with ID starting with 59c81dc2121b1526d3fc9cd11a40475f9cc813f682849fa4432ebb8a77fa2a04 not found: ID does not exist" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.946257 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43a836c9-b9f7-4991-9cb4-db6dce6f8e08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43a836c9-b9f7-4991-9cb4-db6dce6f8e08" (UID: "43a836c9-b9f7-4991-9cb4-db6dce6f8e08"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.951308 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43a836c9-b9f7-4991-9cb4-db6dce6f8e08-config-data" (OuterVolumeSpecName: "config-data") pod "43a836c9-b9f7-4991-9cb4-db6dce6f8e08" (UID: "43a836c9-b9f7-4991-9cb4-db6dce6f8e08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.991445 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43a836c9-b9f7-4991-9cb4-db6dce6f8e08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.991650 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43a836c9-b9f7-4991-9cb4-db6dce6f8e08-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.991740 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43a836c9-b9f7-4991-9cb4-db6dce6f8e08-logs\") on node \"crc\" DevicePath \"\"" Feb 02 13:27:06 crc kubenswrapper[4721]: I0202 13:27:06.991807 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25b4g\" (UniqueName: \"kubernetes.io/projected/43a836c9-b9f7-4991-9cb4-db6dce6f8e08-kube-api-access-25b4g\") on node \"crc\" DevicePath \"\"" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.024944 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-crdjt"] Feb 02 13:27:07 crc kubenswrapper[4721]: E0202 13:27:07.025624 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a836c9-b9f7-4991-9cb4-db6dce6f8e08" containerName="nova-api-api" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.025648 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a836c9-b9f7-4991-9cb4-db6dce6f8e08" containerName="nova-api-api" Feb 02 13:27:07 crc kubenswrapper[4721]: E0202 13:27:07.025670 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a836c9-b9f7-4991-9cb4-db6dce6f8e08" containerName="nova-api-log" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.025681 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a836c9-b9f7-4991-9cb4-db6dce6f8e08" containerName="nova-api-log" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.025978 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="43a836c9-b9f7-4991-9cb4-db6dce6f8e08" containerName="nova-api-api" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.026004 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="43a836c9-b9f7-4991-9cb4-db6dce6f8e08" containerName="nova-api-log" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.027139 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-crdjt" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.030621 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.030846 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.055503 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-crdjt"] Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.094379 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27df0911-fe79-4339-a6fe-cf538f97a247-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-crdjt\" (UID: \"27df0911-fe79-4339-a6fe-cf538f97a247\") " pod="openstack/nova-cell1-cell-mapping-crdjt" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.094668 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27df0911-fe79-4339-a6fe-cf538f97a247-scripts\") pod \"nova-cell1-cell-mapping-crdjt\" (UID: \"27df0911-fe79-4339-a6fe-cf538f97a247\") " pod="openstack/nova-cell1-cell-mapping-crdjt" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.094779 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6svz\" (UniqueName: \"kubernetes.io/projected/27df0911-fe79-4339-a6fe-cf538f97a247-kube-api-access-m6svz\") pod \"nova-cell1-cell-mapping-crdjt\" (UID: \"27df0911-fe79-4339-a6fe-cf538f97a247\") " pod="openstack/nova-cell1-cell-mapping-crdjt" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.094909 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27df0911-fe79-4339-a6fe-cf538f97a247-config-data\") pod \"nova-cell1-cell-mapping-crdjt\" (UID: \"27df0911-fe79-4339-a6fe-cf538f97a247\") " pod="openstack/nova-cell1-cell-mapping-crdjt" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.197847 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27df0911-fe79-4339-a6fe-cf538f97a247-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-crdjt\" (UID: \"27df0911-fe79-4339-a6fe-cf538f97a247\") " pod="openstack/nova-cell1-cell-mapping-crdjt" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.198057 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27df0911-fe79-4339-a6fe-cf538f97a247-scripts\") pod \"nova-cell1-cell-mapping-crdjt\" (UID: \"27df0911-fe79-4339-a6fe-cf538f97a247\") " pod="openstack/nova-cell1-cell-mapping-crdjt" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.198140 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6svz\" (UniqueName: \"kubernetes.io/projected/27df0911-fe79-4339-a6fe-cf538f97a247-kube-api-access-m6svz\") pod \"nova-cell1-cell-mapping-crdjt\" (UID: \"27df0911-fe79-4339-a6fe-cf538f97a247\") " pod="openstack/nova-cell1-cell-mapping-crdjt" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.198229 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/27df0911-fe79-4339-a6fe-cf538f97a247-config-data\") pod \"nova-cell1-cell-mapping-crdjt\" (UID: \"27df0911-fe79-4339-a6fe-cf538f97a247\") " pod="openstack/nova-cell1-cell-mapping-crdjt" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.203198 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27df0911-fe79-4339-a6fe-cf538f97a247-config-data\") pod \"nova-cell1-cell-mapping-crdjt\" (UID: \"27df0911-fe79-4339-a6fe-cf538f97a247\") " pod="openstack/nova-cell1-cell-mapping-crdjt" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.203363 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27df0911-fe79-4339-a6fe-cf538f97a247-scripts\") pod \"nova-cell1-cell-mapping-crdjt\" (UID: \"27df0911-fe79-4339-a6fe-cf538f97a247\") " pod="openstack/nova-cell1-cell-mapping-crdjt" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.219989 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27df0911-fe79-4339-a6fe-cf538f97a247-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-crdjt\" (UID: \"27df0911-fe79-4339-a6fe-cf538f97a247\") " pod="openstack/nova-cell1-cell-mapping-crdjt" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.222349 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6svz\" (UniqueName: \"kubernetes.io/projected/27df0911-fe79-4339-a6fe-cf538f97a247-kube-api-access-m6svz\") pod \"nova-cell1-cell-mapping-crdjt\" (UID: \"27df0911-fe79-4339-a6fe-cf538f97a247\") " pod="openstack/nova-cell1-cell-mapping-crdjt" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.338989 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.364179 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.382833 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.385698 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.391743 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.392028 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.392308 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.401851 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.433172 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-crdjt" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.508341 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\") " pod="openstack/nova-api-0" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.508413 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rww68\" (UniqueName: \"kubernetes.io/projected/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-kube-api-access-rww68\") pod \"nova-api-0\" (UID: \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\") " pod="openstack/nova-api-0" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.508523 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\") " pod="openstack/nova-api-0" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.508618 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-config-data\") pod \"nova-api-0\" (UID: \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\") " pod="openstack/nova-api-0" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.508818 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-public-tls-certs\") pod \"nova-api-0\" (UID: \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\") " pod="openstack/nova-api-0" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.508893 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-logs\") pod \"nova-api-0\" (UID: \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\") " pod="openstack/nova-api-0" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.612817 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-config-data\") pod \"nova-api-0\" (UID: \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\") " pod="openstack/nova-api-0" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.613202 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-public-tls-certs\") pod \"nova-api-0\" (UID: \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\") " pod="openstack/nova-api-0" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.613237 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-logs\") pod \"nova-api-0\" (UID: \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\") " pod="openstack/nova-api-0" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.613417 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\") " pod="openstack/nova-api-0" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.613441 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rww68\" (UniqueName: \"kubernetes.io/projected/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-kube-api-access-rww68\") pod \"nova-api-0\" (UID: \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\") " pod="openstack/nova-api-0" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.613483 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\") " pod="openstack/nova-api-0" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.613855 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-logs\") pod \"nova-api-0\" (UID: \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\") " pod="openstack/nova-api-0" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.617840 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\") " pod="openstack/nova-api-0" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.619385 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-config-data\") pod \"nova-api-0\" (UID: \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\") " pod="openstack/nova-api-0" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.619423 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\") " pod="openstack/nova-api-0" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.621466 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-public-tls-certs\") pod \"nova-api-0\" (UID: \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\") " pod="openstack/nova-api-0" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.634024 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rww68\" (UniqueName: \"kubernetes.io/projected/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-kube-api-access-rww68\") pod \"nova-api-0\" (UID: \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\") " pod="openstack/nova-api-0" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.708570 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 13:27:07 crc kubenswrapper[4721]: I0202 13:27:07.961609 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c678f02b-cbee-4578-9e28-067b63af2682","Type":"ContainerStarted","Data":"0528daf98695dfbeae8802550b4ef40b0a4707fa1f775f8e1fb16ee705df3595"} Feb 02 13:27:08 crc kubenswrapper[4721]: I0202 13:27:08.046260 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-crdjt"] Feb 02 13:27:08 crc kubenswrapper[4721]: W0202 13:27:08.262269 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod136a6410_a20a_4e6a_bbc4_aaa3634e8af7.slice/crio-c6c02957f5278ef86c39592f8b9cb95239b0996358dc335274d947ae3f698b69 WatchSource:0}: Error finding container c6c02957f5278ef86c39592f8b9cb95239b0996358dc335274d947ae3f698b69: Status 404 returned error can't find the container with id c6c02957f5278ef86c39592f8b9cb95239b0996358dc335274d947ae3f698b69 Feb 02 13:27:08 crc kubenswrapper[4721]: I0202 13:27:08.265173 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:27:08 crc kubenswrapper[4721]: I0202 13:27:08.423880 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43a836c9-b9f7-4991-9cb4-db6dce6f8e08" path="/var/lib/kubelet/pods/43a836c9-b9f7-4991-9cb4-db6dce6f8e08/volumes" Feb 02 13:27:09 crc kubenswrapper[4721]: I0202 13:27:09.012634 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-crdjt" event={"ID":"27df0911-fe79-4339-a6fe-cf538f97a247","Type":"ContainerStarted","Data":"38697017d92e58f9ce89dc861401173c6d0237bce2d2e6a9a0bc00c9d2093bfd"} Feb 02 13:27:09 crc kubenswrapper[4721]: I0202 13:27:09.012698 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-crdjt" event={"ID":"27df0911-fe79-4339-a6fe-cf538f97a247","Type":"ContainerStarted","Data":"4a54c161cdb3b885270001f046db0e6690c82344ac0228a96c3ecc48853db922"} Feb 02 13:27:09 crc kubenswrapper[4721]: I0202 13:27:09.015059 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c678f02b-cbee-4578-9e28-067b63af2682","Type":"ContainerStarted","Data":"649ef2ed3aa64d3d89d7aea442bebcd1359026e865b111008bd46d8dcb55496f"} Feb 02 13:27:09 crc kubenswrapper[4721]: I0202 13:27:09.017539 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"136a6410-a20a-4e6a-bbc4-aaa3634e8af7","Type":"ContainerStarted","Data":"bf21873a2e828ded4b708082cfc1dfc4d06001ee10f1de31d95e3013760a815b"} Feb 02 13:27:09 crc kubenswrapper[4721]: I0202 13:27:09.017576 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"136a6410-a20a-4e6a-bbc4-aaa3634e8af7","Type":"ContainerStarted","Data":"f756020708f6c293792ee758adf5bf7623dfce1b04f53de592987961836aa5ca"} Feb 02 13:27:09 crc kubenswrapper[4721]: I0202 13:27:09.017588 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"136a6410-a20a-4e6a-bbc4-aaa3634e8af7","Type":"ContainerStarted","Data":"c6c02957f5278ef86c39592f8b9cb95239b0996358dc335274d947ae3f698b69"} Feb 02 13:27:09 crc kubenswrapper[4721]: I0202 13:27:09.045563 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-crdjt" podStartSLOduration=3.045541065 podStartE2EDuration="3.045541065s" 
podCreationTimestamp="2026-02-02 13:27:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:27:09.026851738 +0000 UTC m=+1569.329366127" watchObservedRunningTime="2026-02-02 13:27:09.045541065 +0000 UTC m=+1569.348055454" Feb 02 13:27:09 crc kubenswrapper[4721]: I0202 13:27:09.081816 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.081795298 podStartE2EDuration="2.081795298s" podCreationTimestamp="2026-02-02 13:27:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:27:09.063749228 +0000 UTC m=+1569.366263637" watchObservedRunningTime="2026-02-02 13:27:09.081795298 +0000 UTC m=+1569.384309707" Feb 02 13:27:10 crc kubenswrapper[4721]: I0202 13:27:10.038579 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c678f02b-cbee-4578-9e28-067b63af2682","Type":"ContainerStarted","Data":"937d004a2c0e368a16d6d0ca670b5482820ecb8e33a8a090dda3618c8b3e7f90"} Feb 02 13:27:10 crc kubenswrapper[4721]: I0202 13:27:10.186219 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-fz8n5" Feb 02 13:27:10 crc kubenswrapper[4721]: I0202 13:27:10.268239 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-pclnt"] Feb 02 13:27:10 crc kubenswrapper[4721]: I0202 13:27:10.268484 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-pclnt" podUID="b87e6edb-4947-41a9-b95c-5120f9b4dbdc" containerName="dnsmasq-dns" containerID="cri-o://513383a4b8dba8040ab714a52c634492b679dbd50ddd5850c311449d4d0f662d" gracePeriod=10 Feb 02 13:27:10 crc kubenswrapper[4721]: E0202 13:27:10.376341 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb87e6edb_4947_41a9_b95c_5120f9b4dbdc.slice/crio-513383a4b8dba8040ab714a52c634492b679dbd50ddd5850c311449d4d0f662d.scope\": RecentStats: unable to find data in memory cache]" Feb 02 13:27:10 crc kubenswrapper[4721]: I0202 13:27:10.964198 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-pclnt" Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.035054 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-config\") pod \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\" (UID: \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\") " Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.035111 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-dns-svc\") pod \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\" (UID: \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\") " Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.035188 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-ovsdbserver-nb\") pod \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\" (UID: \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\") " Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.035226 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clh4b\" (UniqueName: \"kubernetes.io/projected/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-kube-api-access-clh4b\") pod \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\" (UID: \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\") " Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.035272 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-dns-swift-storage-0\") pod \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\" (UID: \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\") " Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.035346 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-ovsdbserver-sb\") pod \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\" (UID: \"b87e6edb-4947-41a9-b95c-5120f9b4dbdc\") " Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.040570 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-kube-api-access-clh4b" (OuterVolumeSpecName: "kube-api-access-clh4b") pod "b87e6edb-4947-41a9-b95c-5120f9b4dbdc" (UID: "b87e6edb-4947-41a9-b95c-5120f9b4dbdc"). InnerVolumeSpecName "kube-api-access-clh4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.079697 4721 generic.go:334] "Generic (PLEG): container finished" podID="b87e6edb-4947-41a9-b95c-5120f9b4dbdc" containerID="513383a4b8dba8040ab714a52c634492b679dbd50ddd5850c311449d4d0f662d" exitCode=0 Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.079785 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-pclnt" Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.079805 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-pclnt" event={"ID":"b87e6edb-4947-41a9-b95c-5120f9b4dbdc","Type":"ContainerDied","Data":"513383a4b8dba8040ab714a52c634492b679dbd50ddd5850c311449d4d0f662d"} Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.081127 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-pclnt" event={"ID":"b87e6edb-4947-41a9-b95c-5120f9b4dbdc","Type":"ContainerDied","Data":"5099999a072997f647e2393c3c5cb7d2e07ed8a533074bc15ca483b0989cce5e"} Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.081152 4721 scope.go:117] "RemoveContainer" containerID="513383a4b8dba8040ab714a52c634492b679dbd50ddd5850c311449d4d0f662d" Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.117178 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b87e6edb-4947-41a9-b95c-5120f9b4dbdc" (UID: "b87e6edb-4947-41a9-b95c-5120f9b4dbdc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.135486 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b87e6edb-4947-41a9-b95c-5120f9b4dbdc" (UID: "b87e6edb-4947-41a9-b95c-5120f9b4dbdc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.140276 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clh4b\" (UniqueName: \"kubernetes.io/projected/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-kube-api-access-clh4b\") on node \"crc\" DevicePath \"\"" Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.141475 4721 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.141494 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.199981 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b87e6edb-4947-41a9-b95c-5120f9b4dbdc" (UID: "b87e6edb-4947-41a9-b95c-5120f9b4dbdc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.215590 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-config" (OuterVolumeSpecName: "config") pod "b87e6edb-4947-41a9-b95c-5120f9b4dbdc" (UID: "b87e6edb-4947-41a9-b95c-5120f9b4dbdc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.217053 4721 scope.go:117] "RemoveContainer" containerID="4daf317fd25f62ffeb9c6c1f2bdb1a7d756e3cbacfa5fcec5e77dfc48d7f394c" Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.244149 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-config\") on node \"crc\" DevicePath \"\"" Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.244188 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.249713 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b87e6edb-4947-41a9-b95c-5120f9b4dbdc" (UID: "b87e6edb-4947-41a9-b95c-5120f9b4dbdc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.261875 4721 scope.go:117] "RemoveContainer" containerID="513383a4b8dba8040ab714a52c634492b679dbd50ddd5850c311449d4d0f662d" Feb 02 13:27:11 crc kubenswrapper[4721]: E0202 13:27:11.262309 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"513383a4b8dba8040ab714a52c634492b679dbd50ddd5850c311449d4d0f662d\": container with ID starting with 513383a4b8dba8040ab714a52c634492b679dbd50ddd5850c311449d4d0f662d not found: ID does not exist" containerID="513383a4b8dba8040ab714a52c634492b679dbd50ddd5850c311449d4d0f662d" Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.262352 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"513383a4b8dba8040ab714a52c634492b679dbd50ddd5850c311449d4d0f662d"} err="failed to get container status \"513383a4b8dba8040ab714a52c634492b679dbd50ddd5850c311449d4d0f662d\": rpc error: code = NotFound desc = could not find container \"513383a4b8dba8040ab714a52c634492b679dbd50ddd5850c311449d4d0f662d\": container with ID starting with 513383a4b8dba8040ab714a52c634492b679dbd50ddd5850c311449d4d0f662d not found: ID does not exist" Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.262382 4721 scope.go:117] "RemoveContainer" containerID="4daf317fd25f62ffeb9c6c1f2bdb1a7d756e3cbacfa5fcec5e77dfc48d7f394c" Feb 02 13:27:11 crc kubenswrapper[4721]: E0202 13:27:11.262888 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4daf317fd25f62ffeb9c6c1f2bdb1a7d756e3cbacfa5fcec5e77dfc48d7f394c\": container with ID starting with 4daf317fd25f62ffeb9c6c1f2bdb1a7d756e3cbacfa5fcec5e77dfc48d7f394c not found: ID does not exist" containerID="4daf317fd25f62ffeb9c6c1f2bdb1a7d756e3cbacfa5fcec5e77dfc48d7f394c" Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.262995 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4daf317fd25f62ffeb9c6c1f2bdb1a7d756e3cbacfa5fcec5e77dfc48d7f394c"} err="failed to get container status \"4daf317fd25f62ffeb9c6c1f2bdb1a7d756e3cbacfa5fcec5e77dfc48d7f394c\": rpc error: code = NotFound desc = could not find container \"4daf317fd25f62ffeb9c6c1f2bdb1a7d756e3cbacfa5fcec5e77dfc48d7f394c\": container with ID starting with 
4daf317fd25f62ffeb9c6c1f2bdb1a7d756e3cbacfa5fcec5e77dfc48d7f394c not found: ID does not exist" Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.345993 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b87e6edb-4947-41a9-b95c-5120f9b4dbdc-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.471468 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-pclnt"] Feb 02 13:27:11 crc kubenswrapper[4721]: I0202 13:27:11.484980 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-pclnt"] Feb 02 13:27:12 crc kubenswrapper[4721]: I0202 13:27:12.100855 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c678f02b-cbee-4578-9e28-067b63af2682","Type":"ContainerStarted","Data":"cd5a14c0f956517b7ab15b3a6ee8d26d1a69dbd8f7c7525fbe48b4f147cbbb30"} Feb 02 13:27:12 crc kubenswrapper[4721]: I0202 13:27:12.102879 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 13:27:12 crc kubenswrapper[4721]: I0202 13:27:12.127739 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.301826104 podStartE2EDuration="7.127721127s" podCreationTimestamp="2026-02-02 13:27:05 +0000 UTC" firstStartedPulling="2026-02-02 13:27:06.664839148 +0000 UTC m=+1566.967353537" lastFinishedPulling="2026-02-02 13:27:11.490734171 +0000 UTC m=+1571.793248560" observedRunningTime="2026-02-02 13:27:12.12342742 +0000 UTC m=+1572.425941819" watchObservedRunningTime="2026-02-02 13:27:12.127721127 +0000 UTC m=+1572.430235516" Feb 02 13:27:12 crc kubenswrapper[4721]: I0202 13:27:12.423524 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b87e6edb-4947-41a9-b95c-5120f9b4dbdc" path="/var/lib/kubelet/pods/b87e6edb-4947-41a9-b95c-5120f9b4dbdc/volumes" Feb 02 13:27:14 crc kubenswrapper[4721]: I0202 13:27:14.136332 4721 generic.go:334] "Generic (PLEG): container finished" podID="27df0911-fe79-4339-a6fe-cf538f97a247" containerID="38697017d92e58f9ce89dc861401173c6d0237bce2d2e6a9a0bc00c9d2093bfd" exitCode=0 Feb 02 13:27:14 crc kubenswrapper[4721]: I0202 13:27:14.136387 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-crdjt" event={"ID":"27df0911-fe79-4339-a6fe-cf538f97a247","Type":"ContainerDied","Data":"38697017d92e58f9ce89dc861401173c6d0237bce2d2e6a9a0bc00c9d2093bfd"} Feb 02 13:27:15 crc kubenswrapper[4721]: I0202 13:27:15.624681 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-crdjt" Feb 02 13:27:15 crc kubenswrapper[4721]: I0202 13:27:15.764784 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27df0911-fe79-4339-a6fe-cf538f97a247-scripts\") pod \"27df0911-fe79-4339-a6fe-cf538f97a247\" (UID: \"27df0911-fe79-4339-a6fe-cf538f97a247\") " Feb 02 13:27:15 crc kubenswrapper[4721]: I0202 13:27:15.765115 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27df0911-fe79-4339-a6fe-cf538f97a247-config-data\") pod \"27df0911-fe79-4339-a6fe-cf538f97a247\" (UID: \"27df0911-fe79-4339-a6fe-cf538f97a247\") " Feb 02 13:27:15 crc kubenswrapper[4721]: I0202 13:27:15.765239 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6svz\" (UniqueName: \"kubernetes.io/projected/27df0911-fe79-4339-a6fe-cf538f97a247-kube-api-access-m6svz\") pod \"27df0911-fe79-4339-a6fe-cf538f97a247\" (UID: \"27df0911-fe79-4339-a6fe-cf538f97a247\") " Feb 02 13:27:15 crc kubenswrapper[4721]: I0202 13:27:15.765565 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27df0911-fe79-4339-a6fe-cf538f97a247-combined-ca-bundle\") pod \"27df0911-fe79-4339-a6fe-cf538f97a247\" (UID: \"27df0911-fe79-4339-a6fe-cf538f97a247\") " Feb 02 13:27:15 crc kubenswrapper[4721]: I0202 13:27:15.772592 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27df0911-fe79-4339-a6fe-cf538f97a247-scripts" (OuterVolumeSpecName: "scripts") pod "27df0911-fe79-4339-a6fe-cf538f97a247" (UID: "27df0911-fe79-4339-a6fe-cf538f97a247"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:27:15 crc kubenswrapper[4721]: I0202 13:27:15.772960 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27df0911-fe79-4339-a6fe-cf538f97a247-kube-api-access-m6svz" (OuterVolumeSpecName: "kube-api-access-m6svz") pod "27df0911-fe79-4339-a6fe-cf538f97a247" (UID: "27df0911-fe79-4339-a6fe-cf538f97a247"). InnerVolumeSpecName "kube-api-access-m6svz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:27:15 crc kubenswrapper[4721]: I0202 13:27:15.798531 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27df0911-fe79-4339-a6fe-cf538f97a247-config-data" (OuterVolumeSpecName: "config-data") pod "27df0911-fe79-4339-a6fe-cf538f97a247" (UID: "27df0911-fe79-4339-a6fe-cf538f97a247"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:27:15 crc kubenswrapper[4721]: I0202 13:27:15.799968 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27df0911-fe79-4339-a6fe-cf538f97a247-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27df0911-fe79-4339-a6fe-cf538f97a247" (UID: "27df0911-fe79-4339-a6fe-cf538f97a247"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:27:15 crc kubenswrapper[4721]: I0202 13:27:15.868489 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27df0911-fe79-4339-a6fe-cf538f97a247-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:27:15 crc kubenswrapper[4721]: I0202 13:27:15.868772 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27df0911-fe79-4339-a6fe-cf538f97a247-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 13:27:15 crc kubenswrapper[4721]: I0202 13:27:15.868780 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27df0911-fe79-4339-a6fe-cf538f97a247-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:27:15 crc kubenswrapper[4721]: I0202 13:27:15.868791 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6svz\" (UniqueName: \"kubernetes.io/projected/27df0911-fe79-4339-a6fe-cf538f97a247-kube-api-access-m6svz\") on node \"crc\" DevicePath \"\"" Feb 02 13:27:16 crc kubenswrapper[4721]: I0202 13:27:16.162060 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-crdjt" event={"ID":"27df0911-fe79-4339-a6fe-cf538f97a247","Type":"ContainerDied","Data":"4a54c161cdb3b885270001f046db0e6690c82344ac0228a96c3ecc48853db922"} Feb 02 13:27:16 crc kubenswrapper[4721]: I0202 13:27:16.162117 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a54c161cdb3b885270001f046db0e6690c82344ac0228a96c3ecc48853db922" Feb 02 13:27:16 crc kubenswrapper[4721]: I0202 13:27:16.162200 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-crdjt" Feb 02 13:27:16 crc kubenswrapper[4721]: I0202 13:27:16.347417 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 13:27:16 crc kubenswrapper[4721]: I0202 13:27:16.347696 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1f591d46-b7ce-4767-987a-bcdaa2f6d3b1" containerName="nova-scheduler-scheduler" containerID="cri-o://5acb431b8e0eb7677b98d89ff66f186391a5d39e1b675d018fe41d3b90c905cd" gracePeriod=30 Feb 02 13:27:16 crc kubenswrapper[4721]: I0202 13:27:16.361096 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:27:16 crc kubenswrapper[4721]: I0202 13:27:16.361933 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="136a6410-a20a-4e6a-bbc4-aaa3634e8af7" containerName="nova-api-api" containerID="cri-o://bf21873a2e828ded4b708082cfc1dfc4d06001ee10f1de31d95e3013760a815b" gracePeriod=30 Feb 02 13:27:16 crc kubenswrapper[4721]: I0202 13:27:16.362108 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="136a6410-a20a-4e6a-bbc4-aaa3634e8af7" containerName="nova-api-log" containerID="cri-o://f756020708f6c293792ee758adf5bf7623dfce1b04f53de592987961836aa5ca" gracePeriod=30 Feb 02 13:27:16 crc kubenswrapper[4721]: I0202 13:27:16.430235 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 13:27:16 crc kubenswrapper[4721]: I0202 13:27:16.431194 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fbfc7ae7-2e8c-4696-a72e-7308794bf726" 
containerName="nova-metadata-log" containerID="cri-o://4fd6cf6542770e67b596179be2f1fb73bfe89b859a5b9f3f4406655d59a15229" gracePeriod=30 Feb 02 13:27:16 crc kubenswrapper[4721]: I0202 13:27:16.431282 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fbfc7ae7-2e8c-4696-a72e-7308794bf726" containerName="nova-metadata-metadata" containerID="cri-o://74967a457dae02a2ce8a603739bc30e8cdd7da978739f51561117554fb3ba203" gracePeriod=30 Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.139623 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.175470 4721 generic.go:334] "Generic (PLEG): container finished" podID="136a6410-a20a-4e6a-bbc4-aaa3634e8af7" containerID="bf21873a2e828ded4b708082cfc1dfc4d06001ee10f1de31d95e3013760a815b" exitCode=0 Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.175779 4721 generic.go:334] "Generic (PLEG): container finished" podID="136a6410-a20a-4e6a-bbc4-aaa3634e8af7" containerID="f756020708f6c293792ee758adf5bf7623dfce1b04f53de592987961836aa5ca" exitCode=143 Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.175543 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.175549 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"136a6410-a20a-4e6a-bbc4-aaa3634e8af7","Type":"ContainerDied","Data":"bf21873a2e828ded4b708082cfc1dfc4d06001ee10f1de31d95e3013760a815b"} Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.177130 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"136a6410-a20a-4e6a-bbc4-aaa3634e8af7","Type":"ContainerDied","Data":"f756020708f6c293792ee758adf5bf7623dfce1b04f53de592987961836aa5ca"} Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.177149 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"136a6410-a20a-4e6a-bbc4-aaa3634e8af7","Type":"ContainerDied","Data":"c6c02957f5278ef86c39592f8b9cb95239b0996358dc335274d947ae3f698b69"} Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.177169 4721 scope.go:117] "RemoveContainer" containerID="bf21873a2e828ded4b708082cfc1dfc4d06001ee10f1de31d95e3013760a815b" Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.179694 4721 generic.go:334] "Generic (PLEG): container finished" podID="fbfc7ae7-2e8c-4696-a72e-7308794bf726" containerID="4fd6cf6542770e67b596179be2f1fb73bfe89b859a5b9f3f4406655d59a15229" exitCode=143 Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.179781 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fbfc7ae7-2e8c-4696-a72e-7308794bf726","Type":"ContainerDied","Data":"4fd6cf6542770e67b596179be2f1fb73bfe89b859a5b9f3f4406655d59a15229"} Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.202718 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-public-tls-certs\") pod \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\" (UID: \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\") " Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.202760 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-internal-tls-certs\") pod \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\" (UID: \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\") " Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.202828 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-combined-ca-bundle\") pod \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\" (UID: \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\") " Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.202861 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rww68\" (UniqueName: \"kubernetes.io/projected/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-kube-api-access-rww68\") pod \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\" (UID: \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\") " Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.202938 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-logs\") pod \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\" (UID: \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\") " Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.203034 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-config-data\") pod \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\" (UID: \"136a6410-a20a-4e6a-bbc4-aaa3634e8af7\") " Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.204570 4721 scope.go:117] "RemoveContainer" containerID="f756020708f6c293792ee758adf5bf7623dfce1b04f53de592987961836aa5ca" Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.206555 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-logs" (OuterVolumeSpecName: "logs") pod "136a6410-a20a-4e6a-bbc4-aaa3634e8af7" (UID: "136a6410-a20a-4e6a-bbc4-aaa3634e8af7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.209706 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-kube-api-access-rww68" (OuterVolumeSpecName: "kube-api-access-rww68") pod "136a6410-a20a-4e6a-bbc4-aaa3634e8af7" (UID: "136a6410-a20a-4e6a-bbc4-aaa3634e8af7"). InnerVolumeSpecName "kube-api-access-rww68". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.242272 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-config-data" (OuterVolumeSpecName: "config-data") pod "136a6410-a20a-4e6a-bbc4-aaa3634e8af7" (UID: "136a6410-a20a-4e6a-bbc4-aaa3634e8af7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.242412 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "136a6410-a20a-4e6a-bbc4-aaa3634e8af7" (UID: "136a6410-a20a-4e6a-bbc4-aaa3634e8af7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.291248 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "136a6410-a20a-4e6a-bbc4-aaa3634e8af7" (UID: "136a6410-a20a-4e6a-bbc4-aaa3634e8af7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.295330 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "136a6410-a20a-4e6a-bbc4-aaa3634e8af7" (UID: "136a6410-a20a-4e6a-bbc4-aaa3634e8af7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.302102 4721 scope.go:117] "RemoveContainer" containerID="bf21873a2e828ded4b708082cfc1dfc4d06001ee10f1de31d95e3013760a815b" Feb 02 13:27:17 crc kubenswrapper[4721]: E0202 13:27:17.302590 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf21873a2e828ded4b708082cfc1dfc4d06001ee10f1de31d95e3013760a815b\": container with ID starting with bf21873a2e828ded4b708082cfc1dfc4d06001ee10f1de31d95e3013760a815b not found: ID does not exist" containerID="bf21873a2e828ded4b708082cfc1dfc4d06001ee10f1de31d95e3013760a815b" Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.302636 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf21873a2e828ded4b708082cfc1dfc4d06001ee10f1de31d95e3013760a815b"} err="failed to get container status \"bf21873a2e828ded4b708082cfc1dfc4d06001ee10f1de31d95e3013760a815b\": rpc error: code = NotFound desc = could not find container \"bf21873a2e828ded4b708082cfc1dfc4d06001ee10f1de31d95e3013760a815b\": container with ID starting with bf21873a2e828ded4b708082cfc1dfc4d06001ee10f1de31d95e3013760a815b not found: ID does not exist" Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.302665 4721 scope.go:117] "RemoveContainer" containerID="f756020708f6c293792ee758adf5bf7623dfce1b04f53de592987961836aa5ca" Feb 02 13:27:17 crc kubenswrapper[4721]: E0202 13:27:17.303014 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f756020708f6c293792ee758adf5bf7623dfce1b04f53de592987961836aa5ca\": container with ID starting with f756020708f6c293792ee758adf5bf7623dfce1b04f53de592987961836aa5ca not found: ID does not exist" containerID="f756020708f6c293792ee758adf5bf7623dfce1b04f53de592987961836aa5ca" Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.303058 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f756020708f6c293792ee758adf5bf7623dfce1b04f53de592987961836aa5ca"} err="failed to get container status \"f756020708f6c293792ee758adf5bf7623dfce1b04f53de592987961836aa5ca\": rpc error: code = NotFound desc = could not find container \"f756020708f6c293792ee758adf5bf7623dfce1b04f53de592987961836aa5ca\": container with ID starting with f756020708f6c293792ee758adf5bf7623dfce1b04f53de592987961836aa5ca not found: ID does not exist" Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.303099 4721 scope.go:117] "RemoveContainer" 
containerID="bf21873a2e828ded4b708082cfc1dfc4d06001ee10f1de31d95e3013760a815b" Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.303403 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf21873a2e828ded4b708082cfc1dfc4d06001ee10f1de31d95e3013760a815b"} err="failed to get container status \"bf21873a2e828ded4b708082cfc1dfc4d06001ee10f1de31d95e3013760a815b\": rpc error: code = NotFound desc = could not find container \"bf21873a2e828ded4b708082cfc1dfc4d06001ee10f1de31d95e3013760a815b\": container with ID starting with bf21873a2e828ded4b708082cfc1dfc4d06001ee10f1de31d95e3013760a815b not found: ID does not exist" Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.303431 4721 scope.go:117] "RemoveContainer" containerID="f756020708f6c293792ee758adf5bf7623dfce1b04f53de592987961836aa5ca" Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.303758 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f756020708f6c293792ee758adf5bf7623dfce1b04f53de592987961836aa5ca"} err="failed to get container status \"f756020708f6c293792ee758adf5bf7623dfce1b04f53de592987961836aa5ca\": rpc error: code = NotFound desc = could not find container \"f756020708f6c293792ee758adf5bf7623dfce1b04f53de592987961836aa5ca\": container with ID starting with f756020708f6c293792ee758adf5bf7623dfce1b04f53de592987961836aa5ca not found: ID does not exist" Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.305330 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.305353 4721 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.305363 4721 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.305374 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.305384 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rww68\" (UniqueName: \"kubernetes.io/projected/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-kube-api-access-rww68\") on node \"crc\" DevicePath \"\"" Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.305393 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/136a6410-a20a-4e6a-bbc4-aaa3634e8af7-logs\") on node \"crc\" DevicePath \"\"" Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.516470 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.531045 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.541408 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 13:27:17 crc kubenswrapper[4721]: E0202 13:27:17.541840 4721 
Feb 02 13:27:17 crc kubenswrapper[4721]: E0202 13:27:17.541840 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b87e6edb-4947-41a9-b95c-5120f9b4dbdc" containerName="init"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.541856 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b87e6edb-4947-41a9-b95c-5120f9b4dbdc" containerName="init"
Feb 02 13:27:17 crc kubenswrapper[4721]: E0202 13:27:17.541872 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="136a6410-a20a-4e6a-bbc4-aaa3634e8af7" containerName="nova-api-log"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.541878 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="136a6410-a20a-4e6a-bbc4-aaa3634e8af7" containerName="nova-api-log"
Feb 02 13:27:17 crc kubenswrapper[4721]: E0202 13:27:17.541888 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27df0911-fe79-4339-a6fe-cf538f97a247" containerName="nova-manage"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.541893 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="27df0911-fe79-4339-a6fe-cf538f97a247" containerName="nova-manage"
Feb 02 13:27:17 crc kubenswrapper[4721]: E0202 13:27:17.541917 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="136a6410-a20a-4e6a-bbc4-aaa3634e8af7" containerName="nova-api-api"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.541922 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="136a6410-a20a-4e6a-bbc4-aaa3634e8af7" containerName="nova-api-api"
Feb 02 13:27:17 crc kubenswrapper[4721]: E0202 13:27:17.541947 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b87e6edb-4947-41a9-b95c-5120f9b4dbdc" containerName="dnsmasq-dns"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.541953 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b87e6edb-4947-41a9-b95c-5120f9b4dbdc" containerName="dnsmasq-dns"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.542282 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="b87e6edb-4947-41a9-b95c-5120f9b4dbdc" containerName="dnsmasq-dns"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.542324 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="136a6410-a20a-4e6a-bbc4-aaa3634e8af7" containerName="nova-api-log"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.542335 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="27df0911-fe79-4339-a6fe-cf538f97a247" containerName="nova-manage"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.542344 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="136a6410-a20a-4e6a-bbc4-aaa3634e8af7" containerName="nova-api-api"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.544968 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
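The cpu_manager / state_mem / memory_manager RemoveStaleState pairs drop per-container bookkeeping for the deleted pod UIDs before the replacement pod is admitted. That bookkeeping is checkpointed under the kubelet root dir; a sketch for inspecting it on the node, assuming the stock /var/lib/kubelet layout (verify on your node, the exact JSON keys vary by policy):

    import json

    # Checkpoint files consulted by cpu_manager.go / memory_manager.go.
    # Paths are the kubelet defaults; adjust if --root-dir was changed.
    for name in ("cpu_manager_state", "memory_manager_state"):
        try:
            with open(f"/var/lib/kubelet/{name}") as f:
                state = json.load(f)
            # With the "none" CPU policy there are no per-pod entries, only defaults.
            print(name, state.get("policyName"), list(state.get("entries", {}))[:3])
        except FileNotFoundError:
            print(name, "not present on this node")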
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.548913 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.549179 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.549504 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.561219 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.614279 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eccca7c-e269-4ecc-9fce-024196f66aaa-config-data\") pod \"nova-api-0\" (UID: \"8eccca7c-e269-4ecc-9fce-024196f66aaa\") " pod="openstack/nova-api-0"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.614328 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eccca7c-e269-4ecc-9fce-024196f66aaa-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8eccca7c-e269-4ecc-9fce-024196f66aaa\") " pod="openstack/nova-api-0"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.614355 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8eccca7c-e269-4ecc-9fce-024196f66aaa-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8eccca7c-e269-4ecc-9fce-024196f66aaa\") " pod="openstack/nova-api-0"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.614431 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ljpx\" (UniqueName: \"kubernetes.io/projected/8eccca7c-e269-4ecc-9fce-024196f66aaa-kube-api-access-6ljpx\") pod \"nova-api-0\" (UID: \"8eccca7c-e269-4ecc-9fce-024196f66aaa\") " pod="openstack/nova-api-0"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.614496 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8eccca7c-e269-4ecc-9fce-024196f66aaa-logs\") pod \"nova-api-0\" (UID: \"8eccca7c-e269-4ecc-9fce-024196f66aaa\") " pod="openstack/nova-api-0"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.614563 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8eccca7c-e269-4ecc-9fce-024196f66aaa-public-tls-certs\") pod \"nova-api-0\" (UID: \"8eccca7c-e269-4ecc-9fce-024196f66aaa\") " pod="openstack/nova-api-0"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.717051 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8eccca7c-e269-4ecc-9fce-024196f66aaa-logs\") pod \"nova-api-0\" (UID: \"8eccca7c-e269-4ecc-9fce-024196f66aaa\") " pod="openstack/nova-api-0"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.717165 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8eccca7c-e269-4ecc-9fce-024196f66aaa-public-tls-certs\") pod \"nova-api-0\" (UID: \"8eccca7c-e269-4ecc-9fce-024196f66aaa\") " pod="openstack/nova-api-0"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.717266 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eccca7c-e269-4ecc-9fce-024196f66aaa-config-data\") pod \"nova-api-0\" (UID: \"8eccca7c-e269-4ecc-9fce-024196f66aaa\") " pod="openstack/nova-api-0"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.717289 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eccca7c-e269-4ecc-9fce-024196f66aaa-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8eccca7c-e269-4ecc-9fce-024196f66aaa\") " pod="openstack/nova-api-0"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.717313 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8eccca7c-e269-4ecc-9fce-024196f66aaa-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8eccca7c-e269-4ecc-9fce-024196f66aaa\") " pod="openstack/nova-api-0"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.717369 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ljpx\" (UniqueName: \"kubernetes.io/projected/8eccca7c-e269-4ecc-9fce-024196f66aaa-kube-api-access-6ljpx\") pod \"nova-api-0\" (UID: \"8eccca7c-e269-4ecc-9fce-024196f66aaa\") " pod="openstack/nova-api-0"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.718236 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8eccca7c-e269-4ecc-9fce-024196f66aaa-logs\") pod \"nova-api-0\" (UID: \"8eccca7c-e269-4ecc-9fce-024196f66aaa\") " pod="openstack/nova-api-0"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.721188 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eccca7c-e269-4ecc-9fce-024196f66aaa-config-data\") pod \"nova-api-0\" (UID: \"8eccca7c-e269-4ecc-9fce-024196f66aaa\") " pod="openstack/nova-api-0"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.721335 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8eccca7c-e269-4ecc-9fce-024196f66aaa-public-tls-certs\") pod \"nova-api-0\" (UID: \"8eccca7c-e269-4ecc-9fce-024196f66aaa\") " pod="openstack/nova-api-0"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.726714 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eccca7c-e269-4ecc-9fce-024196f66aaa-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8eccca7c-e269-4ecc-9fce-024196f66aaa\") " pod="openstack/nova-api-0"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.729560 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8eccca7c-e269-4ecc-9fce-024196f66aaa-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8eccca7c-e269-4ecc-9fce-024196f66aaa\") " pod="openstack/nova-api-0"
Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.738916 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ljpx\" (UniqueName: \"kubernetes.io/projected/8eccca7c-e269-4ecc-9fce-024196f66aaa-kube-api-access-6ljpx\") pod \"nova-api-0\" (UID: \"8eccca7c-e269-4ecc-9fce-024196f66aaa\") " pod="openstack/nova-api-0"
pod="openstack/nova-api-0" Feb 02 13:27:17 crc kubenswrapper[4721]: I0202 13:27:17.928888 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 13:27:18 crc kubenswrapper[4721]: W0202 13:27:18.382388 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8eccca7c_e269_4ecc_9fce_024196f66aaa.slice/crio-1ee255869e6f59cd6cda7d9aa89fb630c94ff69e7e80d9de3b97442d6b610f4e WatchSource:0}: Error finding container 1ee255869e6f59cd6cda7d9aa89fb630c94ff69e7e80d9de3b97442d6b610f4e: Status 404 returned error can't find the container with id 1ee255869e6f59cd6cda7d9aa89fb630c94ff69e7e80d9de3b97442d6b610f4e Feb 02 13:27:18 crc kubenswrapper[4721]: I0202 13:27:18.385497 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 13:27:18 crc kubenswrapper[4721]: I0202 13:27:18.427893 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="136a6410-a20a-4e6a-bbc4-aaa3634e8af7" path="/var/lib/kubelet/pods/136a6410-a20a-4e6a-bbc4-aaa3634e8af7/volumes" Feb 02 13:27:19 crc kubenswrapper[4721]: I0202 13:27:19.233817 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8eccca7c-e269-4ecc-9fce-024196f66aaa","Type":"ContainerStarted","Data":"3648d6846951dd5e0f3934169f96933d009589baa5c362fe89bdaa232213073c"} Feb 02 13:27:19 crc kubenswrapper[4721]: I0202 13:27:19.234468 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8eccca7c-e269-4ecc-9fce-024196f66aaa","Type":"ContainerStarted","Data":"b521f058d8fe8ab6771dfc8a3e266876588ba640ad8dda8bf97332321154d3c5"} Feb 02 13:27:19 crc kubenswrapper[4721]: I0202 13:27:19.234481 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8eccca7c-e269-4ecc-9fce-024196f66aaa","Type":"ContainerStarted","Data":"1ee255869e6f59cd6cda7d9aa89fb630c94ff69e7e80d9de3b97442d6b610f4e"} Feb 02 13:27:19 crc kubenswrapper[4721]: I0202 13:27:19.236148 4721 generic.go:334] "Generic (PLEG): container finished" podID="1f591d46-b7ce-4767-987a-bcdaa2f6d3b1" containerID="5acb431b8e0eb7677b98d89ff66f186391a5d39e1b675d018fe41d3b90c905cd" exitCode=0 Feb 02 13:27:19 crc kubenswrapper[4721]: I0202 13:27:19.236185 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1f591d46-b7ce-4767-987a-bcdaa2f6d3b1","Type":"ContainerDied","Data":"5acb431b8e0eb7677b98d89ff66f186391a5d39e1b675d018fe41d3b90c905cd"} Feb 02 13:27:19 crc kubenswrapper[4721]: I0202 13:27:19.255242 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.255225421 podStartE2EDuration="2.255225421s" podCreationTimestamp="2026-02-02 13:27:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:27:19.253881824 +0000 UTC m=+1579.556396213" watchObservedRunningTime="2026-02-02 13:27:19.255225421 +0000 UTC m=+1579.557739810" Feb 02 13:27:19 crc kubenswrapper[4721]: I0202 13:27:19.383038 4721 util.go:48] "No ready sandbox for pod can be found. 
Feb 02 13:27:19 crc kubenswrapper[4721]: I0202 13:27:19.557230 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb5wd\" (UniqueName: \"kubernetes.io/projected/1f591d46-b7ce-4767-987a-bcdaa2f6d3b1-kube-api-access-zb5wd\") pod \"1f591d46-b7ce-4767-987a-bcdaa2f6d3b1\" (UID: \"1f591d46-b7ce-4767-987a-bcdaa2f6d3b1\") "
Feb 02 13:27:19 crc kubenswrapper[4721]: I0202 13:27:19.557492 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f591d46-b7ce-4767-987a-bcdaa2f6d3b1-combined-ca-bundle\") pod \"1f591d46-b7ce-4767-987a-bcdaa2f6d3b1\" (UID: \"1f591d46-b7ce-4767-987a-bcdaa2f6d3b1\") "
Feb 02 13:27:19 crc kubenswrapper[4721]: I0202 13:27:19.557526 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f591d46-b7ce-4767-987a-bcdaa2f6d3b1-config-data\") pod \"1f591d46-b7ce-4767-987a-bcdaa2f6d3b1\" (UID: \"1f591d46-b7ce-4767-987a-bcdaa2f6d3b1\") "
Feb 02 13:27:19 crc kubenswrapper[4721]: I0202 13:27:19.563895 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f591d46-b7ce-4767-987a-bcdaa2f6d3b1-kube-api-access-zb5wd" (OuterVolumeSpecName: "kube-api-access-zb5wd") pod "1f591d46-b7ce-4767-987a-bcdaa2f6d3b1" (UID: "1f591d46-b7ce-4767-987a-bcdaa2f6d3b1"). InnerVolumeSpecName "kube-api-access-zb5wd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:27:19 crc kubenswrapper[4721]: I0202 13:27:19.572124 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="fbfc7ae7-2e8c-4696-a72e-7308794bf726" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.254:8775/\": read tcp 10.217.0.2:47056->10.217.0.254:8775: read: connection reset by peer"
Feb 02 13:27:19 crc kubenswrapper[4721]: I0202 13:27:19.572167 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="fbfc7ae7-2e8c-4696-a72e-7308794bf726" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.254:8775/\": read tcp 10.217.0.2:47060->10.217.0.254:8775: read: connection reset by peer"
Feb 02 13:27:19 crc kubenswrapper[4721]: I0202 13:27:19.595015 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f591d46-b7ce-4767-987a-bcdaa2f6d3b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f591d46-b7ce-4767-987a-bcdaa2f6d3b1" (UID: "1f591d46-b7ce-4767-987a-bcdaa2f6d3b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:27:19 crc kubenswrapper[4721]: I0202 13:27:19.597157 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f591d46-b7ce-4767-987a-bcdaa2f6d3b1-config-data" (OuterVolumeSpecName: "config-data") pod "1f591d46-b7ce-4767-987a-bcdaa2f6d3b1" (UID: "1f591d46-b7ce-4767-987a-bcdaa2f6d3b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:27:19 crc kubenswrapper[4721]: I0202 13:27:19.660849 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f591d46-b7ce-4767-987a-bcdaa2f6d3b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 13:27:19 crc kubenswrapper[4721]: I0202 13:27:19.660985 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f591d46-b7ce-4767-987a-bcdaa2f6d3b1-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 13:27:19 crc kubenswrapper[4721]: I0202 13:27:19.661115 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb5wd\" (UniqueName: \"kubernetes.io/projected/1f591d46-b7ce-4767-987a-bcdaa2f6d3b1-kube-api-access-zb5wd\") on node \"crc\" DevicePath \"\""
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.065677 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.170778 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbfc7ae7-2e8c-4696-a72e-7308794bf726-combined-ca-bundle\") pod \"fbfc7ae7-2e8c-4696-a72e-7308794bf726\" (UID: \"fbfc7ae7-2e8c-4696-a72e-7308794bf726\") "
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.170997 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbfc7ae7-2e8c-4696-a72e-7308794bf726-logs\") pod \"fbfc7ae7-2e8c-4696-a72e-7308794bf726\" (UID: \"fbfc7ae7-2e8c-4696-a72e-7308794bf726\") "
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.171036 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbfc7ae7-2e8c-4696-a72e-7308794bf726-config-data\") pod \"fbfc7ae7-2e8c-4696-a72e-7308794bf726\" (UID: \"fbfc7ae7-2e8c-4696-a72e-7308794bf726\") "
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.171574 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbfc7ae7-2e8c-4696-a72e-7308794bf726-logs" (OuterVolumeSpecName: "logs") pod "fbfc7ae7-2e8c-4696-a72e-7308794bf726" (UID: "fbfc7ae7-2e8c-4696-a72e-7308794bf726"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.173617 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t22jm\" (UniqueName: \"kubernetes.io/projected/fbfc7ae7-2e8c-4696-a72e-7308794bf726-kube-api-access-t22jm\") pod \"fbfc7ae7-2e8c-4696-a72e-7308794bf726\" (UID: \"fbfc7ae7-2e8c-4696-a72e-7308794bf726\") "
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.174100 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbfc7ae7-2e8c-4696-a72e-7308794bf726-nova-metadata-tls-certs\") pod \"fbfc7ae7-2e8c-4696-a72e-7308794bf726\" (UID: \"fbfc7ae7-2e8c-4696-a72e-7308794bf726\") "
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.174957 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbfc7ae7-2e8c-4696-a72e-7308794bf726-logs\") on node \"crc\" DevicePath \"\""
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.177684 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbfc7ae7-2e8c-4696-a72e-7308794bf726-kube-api-access-t22jm" (OuterVolumeSpecName: "kube-api-access-t22jm") pod "fbfc7ae7-2e8c-4696-a72e-7308794bf726" (UID: "fbfc7ae7-2e8c-4696-a72e-7308794bf726"). InnerVolumeSpecName "kube-api-access-t22jm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.205237 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbfc7ae7-2e8c-4696-a72e-7308794bf726-config-data" (OuterVolumeSpecName: "config-data") pod "fbfc7ae7-2e8c-4696-a72e-7308794bf726" (UID: "fbfc7ae7-2e8c-4696-a72e-7308794bf726"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.224954 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbfc7ae7-2e8c-4696-a72e-7308794bf726-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fbfc7ae7-2e8c-4696-a72e-7308794bf726" (UID: "fbfc7ae7-2e8c-4696-a72e-7308794bf726"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.249481 4721 generic.go:334] "Generic (PLEG): container finished" podID="fbfc7ae7-2e8c-4696-a72e-7308794bf726" containerID="74967a457dae02a2ce8a603739bc30e8cdd7da978739f51561117554fb3ba203" exitCode=0
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.249554 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fbfc7ae7-2e8c-4696-a72e-7308794bf726","Type":"ContainerDied","Data":"74967a457dae02a2ce8a603739bc30e8cdd7da978739f51561117554fb3ba203"}
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.249584 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fbfc7ae7-2e8c-4696-a72e-7308794bf726","Type":"ContainerDied","Data":"47ce14db062af36ae7fdbf425398b0057cb5127414ac2231235ece255a18295a"}
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.249602 4721 scope.go:117] "RemoveContainer" containerID="74967a457dae02a2ce8a603739bc30e8cdd7da978739f51561117554fb3ba203"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.249655 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.252803 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbfc7ae7-2e8c-4696-a72e-7308794bf726-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "fbfc7ae7-2e8c-4696-a72e-7308794bf726" (UID: "fbfc7ae7-2e8c-4696-a72e-7308794bf726"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.254168 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.254220 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1f591d46-b7ce-4767-987a-bcdaa2f6d3b1","Type":"ContainerDied","Data":"069121de18c3f33f1304829d8169978a460562815bd16049d235be352ef578bb"}
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.283129 4721 scope.go:117] "RemoveContainer" containerID="4fd6cf6542770e67b596179be2f1fb73bfe89b859a5b9f3f4406655d59a15229"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.283163 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t22jm\" (UniqueName: \"kubernetes.io/projected/fbfc7ae7-2e8c-4696-a72e-7308794bf726-kube-api-access-t22jm\") on node \"crc\" DevicePath \"\""
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.283210 4721 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbfc7ae7-2e8c-4696-a72e-7308794bf726-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.283231 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbfc7ae7-2e8c-4696-a72e-7308794bf726-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.283253 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbfc7ae7-2e8c-4696-a72e-7308794bf726-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.344271 4721 scope.go:117] "RemoveContainer" containerID="74967a457dae02a2ce8a603739bc30e8cdd7da978739f51561117554fb3ba203"
Feb 02 13:27:20 crc kubenswrapper[4721]: E0202 13:27:20.344835 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74967a457dae02a2ce8a603739bc30e8cdd7da978739f51561117554fb3ba203\": container with ID starting with 74967a457dae02a2ce8a603739bc30e8cdd7da978739f51561117554fb3ba203 not found: ID does not exist" containerID="74967a457dae02a2ce8a603739bc30e8cdd7da978739f51561117554fb3ba203"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.345015 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74967a457dae02a2ce8a603739bc30e8cdd7da978739f51561117554fb3ba203"} err="failed to get container status \"74967a457dae02a2ce8a603739bc30e8cdd7da978739f51561117554fb3ba203\": rpc error: code = NotFound desc = could not find container \"74967a457dae02a2ce8a603739bc30e8cdd7da978739f51561117554fb3ba203\": container with ID starting with 74967a457dae02a2ce8a603739bc30e8cdd7da978739f51561117554fb3ba203 not found: ID does not exist"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.345248 4721 scope.go:117] "RemoveContainer" containerID="4fd6cf6542770e67b596179be2f1fb73bfe89b859a5b9f3f4406655d59a15229"
Feb 02 13:27:20 crc kubenswrapper[4721]: E0202 13:27:20.346295 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fd6cf6542770e67b596179be2f1fb73bfe89b859a5b9f3f4406655d59a15229\": container with ID starting with 4fd6cf6542770e67b596179be2f1fb73bfe89b859a5b9f3f4406655d59a15229 not found: ID does not exist" containerID="4fd6cf6542770e67b596179be2f1fb73bfe89b859a5b9f3f4406655d59a15229"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.346332 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fd6cf6542770e67b596179be2f1fb73bfe89b859a5b9f3f4406655d59a15229"} err="failed to get container status \"4fd6cf6542770e67b596179be2f1fb73bfe89b859a5b9f3f4406655d59a15229\": rpc error: code = NotFound desc = could not find container \"4fd6cf6542770e67b596179be2f1fb73bfe89b859a5b9f3f4406655d59a15229\": container with ID starting with 4fd6cf6542770e67b596179be2f1fb73bfe89b859a5b9f3f4406655d59a15229 not found: ID does not exist"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.346357 4721 scope.go:117] "RemoveContainer" containerID="5acb431b8e0eb7677b98d89ff66f186391a5d39e1b675d018fe41d3b90c905cd"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.357603 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.384311 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.396210 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 02 13:27:20 crc kubenswrapper[4721]: E0202 13:27:20.396862 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbfc7ae7-2e8c-4696-a72e-7308794bf726" containerName="nova-metadata-metadata"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.396891 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbfc7ae7-2e8c-4696-a72e-7308794bf726" containerName="nova-metadata-metadata"
Feb 02 13:27:20 crc kubenswrapper[4721]: E0202 13:27:20.396923 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbfc7ae7-2e8c-4696-a72e-7308794bf726" containerName="nova-metadata-log"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.396931 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbfc7ae7-2e8c-4696-a72e-7308794bf726" containerName="nova-metadata-log"
Feb 02 13:27:20 crc kubenswrapper[4721]: E0202 13:27:20.396965 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f591d46-b7ce-4767-987a-bcdaa2f6d3b1" containerName="nova-scheduler-scheduler"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.396973 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f591d46-b7ce-4767-987a-bcdaa2f6d3b1" containerName="nova-scheduler-scheduler"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.397281 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbfc7ae7-2e8c-4696-a72e-7308794bf726" containerName="nova-metadata-log"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.397302 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f591d46-b7ce-4767-987a-bcdaa2f6d3b1" containerName="nova-scheduler-scheduler"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.397315 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbfc7ae7-2e8c-4696-a72e-7308794bf726" containerName="nova-metadata-metadata"
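The SyncLoop DELETE / REMOVE / ADD triple is the replacement pattern in miniature: the new pod reuses the name nova-scheduler-0 but carries a fresh UID (1f591d46-b7ce-4767-987a-bcdaa2f6d3b1 gives way to d3e7848b-5b3e-4e6b-8c5e-82cd9f2f7728 below), which is exactly why the stale CPU/memory-manager state keyed by the old UID is purged first. The PLEG events carry both name and UID, so the turnover is easy to extract:

    import re
    from collections import defaultdict

    uids = defaultdict(set)
    # PLEG "event for pod" lines carry the pod name and its UID together.
    pat = re.compile(r'pod="(openstack/[\w-]+)" event=\{"ID":"([0-9a-f-]+)"')
    for line in open("kubelet.log"):   # hypothetical saved excerpt
        m = pat.search(line)
        if m:
            uids[m.group(1)].add(m.group(2))
    for pod, ids in sorted(uids.items()):
        print(pod, "->", sorted(ids))  # two UIDs = the pod was deleted and recreated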
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.398320 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.401574 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.408388 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.426703 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f591d46-b7ce-4767-987a-bcdaa2f6d3b1" path="/var/lib/kubelet/pods/1f591d46-b7ce-4767-987a-bcdaa2f6d3b1/volumes"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.580590 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.590450 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kkz2\" (UniqueName: \"kubernetes.io/projected/d3e7848b-5b3e-4e6b-8c5e-82cd9f2f7728-kube-api-access-6kkz2\") pod \"nova-scheduler-0\" (UID: \"d3e7848b-5b3e-4e6b-8c5e-82cd9f2f7728\") " pod="openstack/nova-scheduler-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.590549 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3e7848b-5b3e-4e6b-8c5e-82cd9f2f7728-config-data\") pod \"nova-scheduler-0\" (UID: \"d3e7848b-5b3e-4e6b-8c5e-82cd9f2f7728\") " pod="openstack/nova-scheduler-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.591108 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e7848b-5b3e-4e6b-8c5e-82cd9f2f7728-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d3e7848b-5b3e-4e6b-8c5e-82cd9f2f7728\") " pod="openstack/nova-scheduler-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.606769 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.618276 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.620531 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.623630 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.623706 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.628792 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.693732 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kkz2\" (UniqueName: \"kubernetes.io/projected/d3e7848b-5b3e-4e6b-8c5e-82cd9f2f7728-kube-api-access-6kkz2\") pod \"nova-scheduler-0\" (UID: \"d3e7848b-5b3e-4e6b-8c5e-82cd9f2f7728\") " pod="openstack/nova-scheduler-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.693807 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3e7848b-5b3e-4e6b-8c5e-82cd9f2f7728-config-data\") pod \"nova-scheduler-0\" (UID: \"d3e7848b-5b3e-4e6b-8c5e-82cd9f2f7728\") " pod="openstack/nova-scheduler-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.693948 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e7848b-5b3e-4e6b-8c5e-82cd9f2f7728-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d3e7848b-5b3e-4e6b-8c5e-82cd9f2f7728\") " pod="openstack/nova-scheduler-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.699816 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e7848b-5b3e-4e6b-8c5e-82cd9f2f7728-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d3e7848b-5b3e-4e6b-8c5e-82cd9f2f7728\") " pod="openstack/nova-scheduler-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.704827 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3e7848b-5b3e-4e6b-8c5e-82cd9f2f7728-config-data\") pod \"nova-scheduler-0\" (UID: \"d3e7848b-5b3e-4e6b-8c5e-82cd9f2f7728\") " pod="openstack/nova-scheduler-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.709045 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kkz2\" (UniqueName: \"kubernetes.io/projected/d3e7848b-5b3e-4e6b-8c5e-82cd9f2f7728-kube-api-access-6kkz2\") pod \"nova-scheduler-0\" (UID: \"d3e7848b-5b3e-4e6b-8c5e-82cd9f2f7728\") " pod="openstack/nova-scheduler-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.720397 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.796307 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc4rm\" (UniqueName: \"kubernetes.io/projected/f6e14b26-cab3-4acd-aad2-8cda004e0282-kube-api-access-tc4rm\") pod \"nova-metadata-0\" (UID: \"f6e14b26-cab3-4acd-aad2-8cda004e0282\") " pod="openstack/nova-metadata-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.796360 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6e14b26-cab3-4acd-aad2-8cda004e0282-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f6e14b26-cab3-4acd-aad2-8cda004e0282\") " pod="openstack/nova-metadata-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.796585 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6e14b26-cab3-4acd-aad2-8cda004e0282-logs\") pod \"nova-metadata-0\" (UID: \"f6e14b26-cab3-4acd-aad2-8cda004e0282\") " pod="openstack/nova-metadata-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.796994 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6e14b26-cab3-4acd-aad2-8cda004e0282-config-data\") pod \"nova-metadata-0\" (UID: \"f6e14b26-cab3-4acd-aad2-8cda004e0282\") " pod="openstack/nova-metadata-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.797133 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6e14b26-cab3-4acd-aad2-8cda004e0282-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f6e14b26-cab3-4acd-aad2-8cda004e0282\") " pod="openstack/nova-metadata-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.899859 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6e14b26-cab3-4acd-aad2-8cda004e0282-config-data\") pod \"nova-metadata-0\" (UID: \"f6e14b26-cab3-4acd-aad2-8cda004e0282\") " pod="openstack/nova-metadata-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.900030 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6e14b26-cab3-4acd-aad2-8cda004e0282-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f6e14b26-cab3-4acd-aad2-8cda004e0282\") " pod="openstack/nova-metadata-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.900152 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc4rm\" (UniqueName: \"kubernetes.io/projected/f6e14b26-cab3-4acd-aad2-8cda004e0282-kube-api-access-tc4rm\") pod \"nova-metadata-0\" (UID: \"f6e14b26-cab3-4acd-aad2-8cda004e0282\") " pod="openstack/nova-metadata-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.900195 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6e14b26-cab3-4acd-aad2-8cda004e0282-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f6e14b26-cab3-4acd-aad2-8cda004e0282\") " pod="openstack/nova-metadata-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.900338 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6e14b26-cab3-4acd-aad2-8cda004e0282-logs\") pod \"nova-metadata-0\" (UID: \"f6e14b26-cab3-4acd-aad2-8cda004e0282\") " pod="openstack/nova-metadata-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.900897 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6e14b26-cab3-4acd-aad2-8cda004e0282-logs\") pod \"nova-metadata-0\" (UID: \"f6e14b26-cab3-4acd-aad2-8cda004e0282\") " pod="openstack/nova-metadata-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.904345 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6e14b26-cab3-4acd-aad2-8cda004e0282-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f6e14b26-cab3-4acd-aad2-8cda004e0282\") " pod="openstack/nova-metadata-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.904973 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6e14b26-cab3-4acd-aad2-8cda004e0282-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f6e14b26-cab3-4acd-aad2-8cda004e0282\") " pod="openstack/nova-metadata-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.914306 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6e14b26-cab3-4acd-aad2-8cda004e0282-config-data\") pod \"nova-metadata-0\" (UID: \"f6e14b26-cab3-4acd-aad2-8cda004e0282\") " pod="openstack/nova-metadata-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.916937 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc4rm\" (UniqueName: \"kubernetes.io/projected/f6e14b26-cab3-4acd-aad2-8cda004e0282-kube-api-access-tc4rm\") pod \"nova-metadata-0\" (UID: \"f6e14b26-cab3-4acd-aad2-8cda004e0282\") " pod="openstack/nova-metadata-0"
Feb 02 13:27:20 crc kubenswrapper[4721]: I0202 13:27:20.987188 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 02 13:27:21 crc kubenswrapper[4721]: I0202 13:27:21.192367 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 02 13:27:21 crc kubenswrapper[4721]: W0202 13:27:21.200909 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3e7848b_5b3e_4e6b_8c5e_82cd9f2f7728.slice/crio-d8b4dda5ad7a0b3d40cae6755ed625204fa40c0f257aee261f6c7ea873cd124d WatchSource:0}: Error finding container d8b4dda5ad7a0b3d40cae6755ed625204fa40c0f257aee261f6c7ea873cd124d: Status 404 returned error can't find the container with id d8b4dda5ad7a0b3d40cae6755ed625204fa40c0f257aee261f6c7ea873cd124d
Feb 02 13:27:21 crc kubenswrapper[4721]: I0202 13:27:21.272513 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d3e7848b-5b3e-4e6b-8c5e-82cd9f2f7728","Type":"ContainerStarted","Data":"d8b4dda5ad7a0b3d40cae6755ed625204fa40c0f257aee261f6c7ea873cd124d"}
Feb 02 13:27:21 crc kubenswrapper[4721]: I0202 13:27:21.439793 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 13:27:21 crc kubenswrapper[4721]: W0202 13:27:21.442401 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6e14b26_cab3_4acd_aad2_8cda004e0282.slice/crio-a271e0b31b8b5aa1808fbdab19d01a670541e51a63b6f31c199fe2ab6574f8a3 WatchSource:0}: Error finding container a271e0b31b8b5aa1808fbdab19d01a670541e51a63b6f31c199fe2ab6574f8a3: Status 404 returned error can't find the container with id a271e0b31b8b5aa1808fbdab19d01a670541e51a63b6f31c199fe2ab6574f8a3
Feb 02 13:27:22 crc kubenswrapper[4721]: I0202 13:27:22.286427 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f6e14b26-cab3-4acd-aad2-8cda004e0282","Type":"ContainerStarted","Data":"ce6a8662a0b8ecab16749cc46d28ed7b9c5b6b3bc69fc300341f3c0fd2e5c384"}
Feb 02 13:27:22 crc kubenswrapper[4721]: I0202 13:27:22.286811 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f6e14b26-cab3-4acd-aad2-8cda004e0282","Type":"ContainerStarted","Data":"7954bcef1e04f6e445d6281716b387227fbc3e2114d3e27b09b8602be4abf5d1"}
Feb 02 13:27:22 crc kubenswrapper[4721]: I0202 13:27:22.286868 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f6e14b26-cab3-4acd-aad2-8cda004e0282","Type":"ContainerStarted","Data":"a271e0b31b8b5aa1808fbdab19d01a670541e51a63b6f31c199fe2ab6574f8a3"}
Feb 02 13:27:22 crc kubenswrapper[4721]: I0202 13:27:22.288286 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d3e7848b-5b3e-4e6b-8c5e-82cd9f2f7728","Type":"ContainerStarted","Data":"859cdc3d39c0363ca2f661c1cf1914488aa73c81c8570620643ccebc9f320a8f"}
Feb 02 13:27:22 crc kubenswrapper[4721]: I0202 13:27:22.310543 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.310518963 podStartE2EDuration="2.310518963s" podCreationTimestamp="2026-02-02 13:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:27:22.308040076 +0000 UTC m=+1582.610554465" watchObservedRunningTime="2026-02-02 13:27:22.310518963 +0000 UTC m=+1582.613033352"
Feb 02 13:27:22 crc kubenswrapper[4721]: I0202 13:27:22.336675 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.336653332 podStartE2EDuration="2.336653332s" podCreationTimestamp="2026-02-02 13:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 13:27:22.328575843 +0000 UTC m=+1582.631090232" watchObservedRunningTime="2026-02-02 13:27:22.336653332 +0000 UTC m=+1582.639167721"
Feb 02 13:27:22 crc kubenswrapper[4721]: I0202 13:27:22.424545 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbfc7ae7-2e8c-4696-a72e-7308794bf726" path="/var/lib/kubelet/pods/fbfc7ae7-2e8c-4696-a72e-7308794bf726/volumes"
Feb 02 13:27:25 crc kubenswrapper[4721]: I0202 13:27:25.721745 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 02 13:27:25 crc kubenswrapper[4721]: I0202 13:27:25.988443 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 02 13:27:25 crc kubenswrapper[4721]: I0202 13:27:25.988909 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 02 13:27:27 crc kubenswrapper[4721]: I0202 13:27:27.929845 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 02 13:27:27 crc kubenswrapper[4721]: I0202 13:27:27.930196 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 02 13:27:28 crc kubenswrapper[4721]: I0202 13:27:28.942215 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8eccca7c-e269-4ecc-9fce-024196f66aaa" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.9:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 02 13:27:28 crc kubenswrapper[4721]: I0202 13:27:28.942230 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8eccca7c-e269-4ecc-9fce-024196f66aaa" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.9:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 02 13:27:30 crc kubenswrapper[4721]: I0202 13:27:30.721446 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 02 13:27:30 crc kubenswrapper[4721]: I0202 13:27:30.758452 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 02 13:27:30 crc kubenswrapper[4721]: I0202 13:27:30.990793 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 02 13:27:30 crc kubenswrapper[4721]: I0202 13:27:30.991482 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 02 13:27:31 crc kubenswrapper[4721]: I0202 13:27:31.447668 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 02 13:27:32 crc kubenswrapper[4721]: I0202 13:27:32.007275 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f6e14b26-cab3-4acd-aad2-8cda004e0282" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.11:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 02 13:27:32 crc kubenswrapper[4721]: I0202 13:27:32.007292 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f6e14b26-cab3-4acd-aad2-8cda004e0282" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.11:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
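The Startup probe failures for nova-api-0 at 13:27:27-28 are the normal boot pattern rather than a fault: the startup probe gates the readiness probe, the kubelet keeps reporting unhealthy while the API boots, and the run of failures ends when the probe flips to started at 13:27:37 (visible below). The boot window falls straight out of the timestamps:

    from datetime import datetime

    fmt = "%H:%M:%S.%f"
    container_started = datetime.strptime("13:27:19.233817", fmt)  # first PLEG ContainerStarted
    startup_passed    = datetime.strptime("13:27:37.946710", fmt)  # probe="startup" status="started"
    print(f"nova-api-0 boot window: {(startup_passed - container_started).total_seconds():.1f} s")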
Feb 02 13:27:36 crc kubenswrapper[4721]: I0202 13:27:36.218034 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 02 13:27:37 crc kubenswrapper[4721]: I0202 13:27:37.946710 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 02 13:27:37 crc kubenswrapper[4721]: I0202 13:27:37.949513 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 02 13:27:37 crc kubenswrapper[4721]: I0202 13:27:37.953332 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 02 13:27:37 crc kubenswrapper[4721]: I0202 13:27:37.959472 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 02 13:27:38 crc kubenswrapper[4721]: I0202 13:27:38.515179 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 02 13:27:38 crc kubenswrapper[4721]: I0202 13:27:38.521446 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 02 13:27:40 crc kubenswrapper[4721]: I0202 13:27:40.997636 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 02 13:27:40 crc kubenswrapper[4721]: I0202 13:27:40.999113 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 02 13:27:41 crc kubenswrapper[4721]: I0202 13:27:41.003935 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 02 13:27:41 crc kubenswrapper[4721]: I0202 13:27:41.552204 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 02 13:28:14 crc kubenswrapper[4721]: I0202 13:28:14.763464 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 13:28:14 crc kubenswrapper[4721]: I0202 13:28:14.764112 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 13:28:44 crc kubenswrapper[4721]: I0202 13:28:44.763195 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 13:28:44 crc kubenswrapper[4721]: I0202 13:28:44.763755 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 13:29:09 crc kubenswrapper[4721]: I0202 13:29:09.949253 4721 scope.go:117] "RemoveContainer" containerID="6404bb6f1951c91af9f8453984cfaf480f6307bbda26d7f2b85d0b4cee4e2109"
Feb 02 13:29:09 crc kubenswrapper[4721]: I0202 13:29:09.985516 4721 scope.go:117] "RemoveContainer" containerID="80fd8d4c523d759364a503f4e0957e812daf0265c52b4e63f92c47e96ac7e275"
Feb 02 13:29:10 crc kubenswrapper[4721]: I0202 13:29:10.016378 4721 scope.go:117] "RemoveContainer" containerID="a4049e92c383c0eb65178e6eb81956222b0a85112475c87886800360320c1322"
Feb 02 13:29:10 crc kubenswrapper[4721]: I0202 13:29:10.106502 4721 scope.go:117] "RemoveContainer" containerID="ebcd143f9cd75602d2879409dcec3b4694439187ff4c0fda35cb07bd211f9634"
Feb 02 13:29:10 crc kubenswrapper[4721]: I0202 13:29:10.182316 4721 scope.go:117] "RemoveContainer" containerID="53b5d624776f4223952c5574dd921abb5bb1d5c538eeb2e730f20d491cd8ec2a"
Feb 02 13:29:10 crc kubenswrapper[4721]: I0202 13:29:10.260568 4721 scope.go:117] "RemoveContainer" containerID="d0477d4beedf8835ceccc8981c1de2a9fe8aa3519682eb80e7972c0762297343"
Feb 02 13:29:10 crc kubenswrapper[4721]: I0202 13:29:10.293365 4721 scope.go:117] "RemoveContainer" containerID="c8ac03b5a6a963dc432f18a1012252ac15cbdaf5a852eb90b3130207aa267b95"
Feb 02 13:29:10 crc kubenswrapper[4721]: I0202 13:29:10.347853 4721 scope.go:117] "RemoveContainer" containerID="74a99b13280ba5e058fb97f392a9a2baa22e1224fb962f08950b59d7a1606135"
Feb 02 13:29:14 crc kubenswrapper[4721]: I0202 13:29:14.763473 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 13:29:14 crc kubenswrapper[4721]: I0202 13:29:14.764148 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 13:29:14 crc kubenswrapper[4721]: I0202 13:29:14.764204 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rppjz"
Feb 02 13:29:14 crc kubenswrapper[4721]: I0202 13:29:14.765158 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c"} pod="openshift-machine-config-operator/machine-config-daemon-rppjz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 13:29:14 crc kubenswrapper[4721]: I0202 13:29:14.765206 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" containerID="cri-o://28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c" gracePeriod=600
Feb 02 13:29:14 crc kubenswrapper[4721]: E0202 13:29:14.892561 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:29:15 crc kubenswrapper[4721]: I0202 13:29:15.634435 4721 generic.go:334] "Generic (PLEG): container finished" podID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerID="28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c" exitCode=0
Feb 02 13:29:15 crc kubenswrapper[4721]: I0202 13:29:15.634496 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerDied","Data":"28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c"}
Feb 02 13:29:15 crc kubenswrapper[4721]: I0202 13:29:15.634823 4721 scope.go:117] "RemoveContainer" containerID="4c89d3977af7fbb3779c0661dadea0111ac2d8f3c3974c534b682ad6a4af4aac"
Feb 02 13:29:15 crc kubenswrapper[4721]: I0202 13:29:15.635726 4721 scope.go:117] "RemoveContainer" containerID="28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c"
Feb 02 13:29:15 crc kubenswrapper[4721]: E0202 13:29:15.636286 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:29:27 crc kubenswrapper[4721]: I0202 13:29:27.410019 4721 scope.go:117] "RemoveContainer" containerID="28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c"
Feb 02 13:29:27 crc kubenswrapper[4721]: E0202 13:29:27.410796 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:29:38 crc kubenswrapper[4721]: I0202 13:29:38.412583 4721 scope.go:117] "RemoveContainer" containerID="28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c"
Feb 02 13:29:38 crc kubenswrapper[4721]: E0202 13:29:38.416000 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:29:51 crc kubenswrapper[4721]: I0202 13:29:51.409908 4721 scope.go:117] "RemoveContainer" containerID="28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c"
Feb 02 13:29:51 crc kubenswrapper[4721]: E0202 13:29:51.410638 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:30:00 crc kubenswrapper[4721]: I0202 13:30:00.169269 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500650-fclwl"]
Feb 02 13:30:00 crc kubenswrapper[4721]: I0202 13:30:00.171454 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-fclwl"
Feb 02 13:30:00 crc kubenswrapper[4721]: I0202 13:30:00.173741 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 02 13:30:00 crc kubenswrapper[4721]: I0202 13:30:00.173891 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 02 13:30:00 crc kubenswrapper[4721]: I0202 13:30:00.181951 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500650-fclwl"]
Feb 02 13:30:00 crc kubenswrapper[4721]: I0202 13:30:00.270951 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d19b4436-4c9b-4671-acef-1ba5685cb660-secret-volume\") pod \"collect-profiles-29500650-fclwl\" (UID: \"d19b4436-4c9b-4671-acef-1ba5685cb660\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-fclwl"
Feb 02 13:30:00 crc kubenswrapper[4721]: I0202 13:30:00.271068 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d19b4436-4c9b-4671-acef-1ba5685cb660-config-volume\") pod \"collect-profiles-29500650-fclwl\" (UID: \"d19b4436-4c9b-4671-acef-1ba5685cb660\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-fclwl"
Feb 02 13:30:00 crc kubenswrapper[4721]: I0202 13:30:00.271147 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk5jt\" (UniqueName: \"kubernetes.io/projected/d19b4436-4c9b-4671-acef-1ba5685cb660-kube-api-access-zk5jt\") pod \"collect-profiles-29500650-fclwl\" (UID: \"d19b4436-4c9b-4671-acef-1ba5685cb660\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-fclwl"
Feb 02 13:30:00 crc kubenswrapper[4721]: I0202 13:30:00.373588 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d19b4436-4c9b-4671-acef-1ba5685cb660-secret-volume\") pod \"collect-profiles-29500650-fclwl\" (UID: \"d19b4436-4c9b-4671-acef-1ba5685cb660\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-fclwl"
Feb 02 13:30:00 crc kubenswrapper[4721]: I0202 13:30:00.373966 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d19b4436-4c9b-4671-acef-1ba5685cb660-config-volume\") pod \"collect-profiles-29500650-fclwl\" (UID: \"d19b4436-4c9b-4671-acef-1ba5685cb660\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-fclwl"
Feb 02 13:30:00 crc kubenswrapper[4721]: I0202 13:30:00.374139 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk5jt\" (UniqueName: \"kubernetes.io/projected/d19b4436-4c9b-4671-acef-1ba5685cb660-kube-api-access-zk5jt\") pod \"collect-profiles-29500650-fclwl\" (UID: \"d19b4436-4c9b-4671-acef-1ba5685cb660\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-fclwl"
Feb 02 13:30:00 crc kubenswrapper[4721]: I0202 13:30:00.375073 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d19b4436-4c9b-4671-acef-1ba5685cb660-config-volume\") pod \"collect-profiles-29500650-fclwl\" (UID: \"d19b4436-4c9b-4671-acef-1ba5685cb660\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-fclwl"
Feb 02 13:30:00 crc kubenswrapper[4721]: I0202 13:30:00.379421 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d19b4436-4c9b-4671-acef-1ba5685cb660-secret-volume\") pod \"collect-profiles-29500650-fclwl\" (UID: \"d19b4436-4c9b-4671-acef-1ba5685cb660\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-fclwl"
Feb 02 13:30:00 crc kubenswrapper[4721]: I0202 13:30:00.396569 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk5jt\" (UniqueName: \"kubernetes.io/projected/d19b4436-4c9b-4671-acef-1ba5685cb660-kube-api-access-zk5jt\") pod \"collect-profiles-29500650-fclwl\" (UID: \"d19b4436-4c9b-4671-acef-1ba5685cb660\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-fclwl"
Feb 02 13:30:00 crc kubenswrapper[4721]: I0202 13:30:00.547442 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 02 13:30:00 crc kubenswrapper[4721]: I0202 13:30:00.555685 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-fclwl"
Feb 02 13:30:01 crc kubenswrapper[4721]: I0202 13:30:01.034913 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500650-fclwl"]
Feb 02 13:30:01 crc kubenswrapper[4721]: W0202 13:30:01.037025 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd19b4436_4c9b_4671_acef_1ba5685cb660.slice/crio-34a0f251bb9e78b0bfa37fe20d6c84f3dbcc2bc274affa0b31a69e126f8a70a5 WatchSource:0}: Error finding container 34a0f251bb9e78b0bfa37fe20d6c84f3dbcc2bc274affa0b31a69e126f8a70a5: Status 404 returned error can't find the container with id 34a0f251bb9e78b0bfa37fe20d6c84f3dbcc2bc274affa0b31a69e126f8a70a5
Feb 02 13:30:01 crc kubenswrapper[4721]: I0202 13:30:01.184017 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-fclwl" event={"ID":"d19b4436-4c9b-4671-acef-1ba5685cb660","Type":"ContainerStarted","Data":"34a0f251bb9e78b0bfa37fe20d6c84f3dbcc2bc274affa0b31a69e126f8a70a5"}
Feb 02 13:30:02 crc kubenswrapper[4721]: I0202 13:30:02.199546 4721 generic.go:334] "Generic (PLEG): container finished" podID="d19b4436-4c9b-4671-acef-1ba5685cb660" containerID="5d8115a3c44a297e5941de9c7ae62ed0d1533603d2bcff7cfc2aadd64924c9b1" exitCode=0
Feb 02 13:30:02 crc kubenswrapper[4721]: I0202 13:30:02.199654 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-fclwl" event={"ID":"d19b4436-4c9b-4671-acef-1ba5685cb660","Type":"ContainerDied","Data":"5d8115a3c44a297e5941de9c7ae62ed0d1533603d2bcff7cfc2aadd64924c9b1"}
Feb 02 13:30:03 crc kubenswrapper[4721]: I0202 13:30:03.599597 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-fclwl"
Feb 02 13:30:03 crc kubenswrapper[4721]: I0202 13:30:03.652878 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk5jt\" (UniqueName: \"kubernetes.io/projected/d19b4436-4c9b-4671-acef-1ba5685cb660-kube-api-access-zk5jt\") pod \"d19b4436-4c9b-4671-acef-1ba5685cb660\" (UID: \"d19b4436-4c9b-4671-acef-1ba5685cb660\") "
Feb 02 13:30:03 crc kubenswrapper[4721]: I0202 13:30:03.653156 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d19b4436-4c9b-4671-acef-1ba5685cb660-secret-volume\") pod \"d19b4436-4c9b-4671-acef-1ba5685cb660\" (UID: \"d19b4436-4c9b-4671-acef-1ba5685cb660\") "
Feb 02 13:30:03 crc kubenswrapper[4721]: I0202 13:30:03.653314 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d19b4436-4c9b-4671-acef-1ba5685cb660-config-volume\") pod \"d19b4436-4c9b-4671-acef-1ba5685cb660\" (UID: \"d19b4436-4c9b-4671-acef-1ba5685cb660\") "
Feb 02 13:30:03 crc kubenswrapper[4721]: I0202 13:30:03.654618 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19b4436-4c9b-4671-acef-1ba5685cb660-config-volume" (OuterVolumeSpecName: "config-volume") pod "d19b4436-4c9b-4671-acef-1ba5685cb660" (UID: "d19b4436-4c9b-4671-acef-1ba5685cb660"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 13:30:03 crc kubenswrapper[4721]: I0202 13:30:03.660692 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19b4436-4c9b-4671-acef-1ba5685cb660-kube-api-access-zk5jt" (OuterVolumeSpecName: "kube-api-access-zk5jt") pod "d19b4436-4c9b-4671-acef-1ba5685cb660" (UID: "d19b4436-4c9b-4671-acef-1ba5685cb660"). InnerVolumeSpecName "kube-api-access-zk5jt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:30:03 crc kubenswrapper[4721]: I0202 13:30:03.661246 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19b4436-4c9b-4671-acef-1ba5685cb660-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d19b4436-4c9b-4671-acef-1ba5685cb660" (UID: "d19b4436-4c9b-4671-acef-1ba5685cb660"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 13:30:03 crc kubenswrapper[4721]: I0202 13:30:03.757939 4721 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d19b4436-4c9b-4671-acef-1ba5685cb660-config-volume\") on node \"crc\" DevicePath \"\""
Feb 02 13:30:03 crc kubenswrapper[4721]: I0202 13:30:03.757969 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk5jt\" (UniqueName: \"kubernetes.io/projected/d19b4436-4c9b-4671-acef-1ba5685cb660-kube-api-access-zk5jt\") on node \"crc\" DevicePath \"\""
Feb 02 13:30:03 crc kubenswrapper[4721]: I0202 13:30:03.757979 4721 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d19b4436-4c9b-4671-acef-1ba5685cb660-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 02 13:30:04 crc kubenswrapper[4721]: I0202 13:30:04.221851 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-fclwl" event={"ID":"d19b4436-4c9b-4671-acef-1ba5685cb660","Type":"ContainerDied","Data":"34a0f251bb9e78b0bfa37fe20d6c84f3dbcc2bc274affa0b31a69e126f8a70a5"}
Feb 02 13:30:04 crc kubenswrapper[4721]: I0202 13:30:04.222224 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34a0f251bb9e78b0bfa37fe20d6c84f3dbcc2bc274affa0b31a69e126f8a70a5"
Feb 02 13:30:04 crc kubenswrapper[4721]: I0202 13:30:04.221899 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-fclwl"
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500650-fclwl" Feb 02 13:30:06 crc kubenswrapper[4721]: I0202 13:30:06.409820 4721 scope.go:117] "RemoveContainer" containerID="28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c" Feb 02 13:30:06 crc kubenswrapper[4721]: E0202 13:30:06.410738 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:30:10 crc kubenswrapper[4721]: I0202 13:30:10.571457 4721 scope.go:117] "RemoveContainer" containerID="59e47087e25d7a69cc9b0e24b51c0193c1d130de3a6fbb82bf929574bc9e38b6" Feb 02 13:30:19 crc kubenswrapper[4721]: I0202 13:30:19.409860 4721 scope.go:117] "RemoveContainer" containerID="28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c" Feb 02 13:30:19 crc kubenswrapper[4721]: E0202 13:30:19.410631 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:30:34 crc kubenswrapper[4721]: I0202 13:30:34.410834 4721 scope.go:117] "RemoveContainer" containerID="28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c" Feb 02 13:30:34 crc kubenswrapper[4721]: E0202 13:30:34.411970 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:30:47 crc kubenswrapper[4721]: I0202 13:30:47.409845 4721 scope.go:117] "RemoveContainer" containerID="28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c" Feb 02 13:30:47 crc kubenswrapper[4721]: E0202 13:30:47.410811 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:31:00 crc kubenswrapper[4721]: I0202 13:31:00.429549 4721 scope.go:117] "RemoveContainer" containerID="28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c" Feb 02 13:31:00 crc kubenswrapper[4721]: E0202 13:31:00.432956 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:31:13 crc kubenswrapper[4721]: I0202 13:31:13.411032 4721 scope.go:117] "RemoveContainer" containerID="28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c" Feb 02 13:31:13 crc kubenswrapper[4721]: E0202 13:31:13.412111 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:31:25 crc kubenswrapper[4721]: I0202 13:31:25.414943 4721 scope.go:117] "RemoveContainer" containerID="28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c" Feb 02 13:31:25 crc kubenswrapper[4721]: E0202 13:31:25.416279 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:31:36 crc kubenswrapper[4721]: I0202 13:31:36.409752 4721 scope.go:117] "RemoveContainer" containerID="28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c" Feb 02 13:31:36 crc kubenswrapper[4721]: E0202 13:31:36.410641 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:31:51 crc kubenswrapper[4721]: I0202 13:31:51.410084 4721 scope.go:117] "RemoveContainer" containerID="28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c" Feb 02 13:31:51 crc kubenswrapper[4721]: E0202 13:31:51.410933 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:32:02 crc kubenswrapper[4721]: I0202 13:32:02.411477 4721 scope.go:117] "RemoveContainer" containerID="28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c" Feb 02 13:32:02 crc kubenswrapper[4721]: E0202 13:32:02.412721 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:32:15 crc kubenswrapper[4721]: I0202 13:32:15.410353 4721 
scope.go:117] "RemoveContainer" containerID="28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c" Feb 02 13:32:15 crc kubenswrapper[4721]: E0202 13:32:15.411606 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:32:30 crc kubenswrapper[4721]: I0202 13:32:30.419058 4721 scope.go:117] "RemoveContainer" containerID="28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c" Feb 02 13:32:30 crc kubenswrapper[4721]: E0202 13:32:30.419840 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:32:31 crc kubenswrapper[4721]: I0202 13:32:31.069300 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-04a7-account-create-update-xhlq8"] Feb 02 13:32:31 crc kubenswrapper[4721]: I0202 13:32:31.080375 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-4d7hn"] Feb 02 13:32:31 crc kubenswrapper[4721]: I0202 13:32:31.091971 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-sl4gx"] Feb 02 13:32:31 crc kubenswrapper[4721]: I0202 13:32:31.102195 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-1d7e-account-create-update-7jmk5"] Feb 02 13:32:31 crc kubenswrapper[4721]: I0202 13:32:31.112047 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-04a7-account-create-update-xhlq8"] Feb 02 13:32:31 crc kubenswrapper[4721]: I0202 13:32:31.121800 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-sl4gx"] Feb 02 13:32:31 crc kubenswrapper[4721]: I0202 13:32:31.131726 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-1d7e-account-create-update-7jmk5"] Feb 02 13:32:31 crc kubenswrapper[4721]: I0202 13:32:31.141061 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-4d7hn"] Feb 02 13:32:32 crc kubenswrapper[4721]: I0202 13:32:32.424323 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e1ef9e5-26ab-4b7b-b255-73968ed867ce" path="/var/lib/kubelet/pods/4e1ef9e5-26ab-4b7b-b255-73968ed867ce/volumes" Feb 02 13:32:32 crc kubenswrapper[4721]: I0202 13:32:32.427711 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b74e699-bc4f-4415-a9dc-8ad52d916bc0" path="/var/lib/kubelet/pods/8b74e699-bc4f-4415-a9dc-8ad52d916bc0/volumes" Feb 02 13:32:32 crc kubenswrapper[4721]: I0202 13:32:32.430686 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d46c6f8-aff0-4b28-a71b-d98a894afdaf" path="/var/lib/kubelet/pods/9d46c6f8-aff0-4b28-a71b-d98a894afdaf/volumes" Feb 02 13:32:32 crc kubenswrapper[4721]: I0202 13:32:32.431999 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b5e72b39-6085-4753-8b7d-a93a80c95d49" path="/var/lib/kubelet/pods/b5e72b39-6085-4753-8b7d-a93a80c95d49/volumes" Feb 02 13:32:35 crc kubenswrapper[4721]: I0202 13:32:35.056462 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-b945b"] Feb 02 13:32:35 crc kubenswrapper[4721]: I0202 13:32:35.071305 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e588-account-create-update-4crm9"] Feb 02 13:32:35 crc kubenswrapper[4721]: I0202 13:32:35.082580 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-b945b"] Feb 02 13:32:35 crc kubenswrapper[4721]: I0202 13:32:35.102760 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e588-account-create-update-4crm9"] Feb 02 13:32:35 crc kubenswrapper[4721]: I0202 13:32:35.123827 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-4msnh"] Feb 02 13:32:35 crc kubenswrapper[4721]: I0202 13:32:35.131436 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-4msnh"] Feb 02 13:32:35 crc kubenswrapper[4721]: I0202 13:32:35.144363 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-f5fd-account-create-update-8s8md"] Feb 02 13:32:35 crc kubenswrapper[4721]: I0202 13:32:35.159579 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-f5fd-account-create-update-8s8md"] Feb 02 13:32:36 crc kubenswrapper[4721]: I0202 13:32:36.422648 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd" path="/var/lib/kubelet/pods/3d4fa2b7-60f7-4bca-b4a6-751dc5f853dd/volumes" Feb 02 13:32:36 crc kubenswrapper[4721]: I0202 13:32:36.428234 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8261a2f3-c66a-441c-9fc6-a7a6a744b8a3" path="/var/lib/kubelet/pods/8261a2f3-c66a-441c-9fc6-a7a6a744b8a3/volumes" Feb 02 13:32:36 crc kubenswrapper[4721]: I0202 13:32:36.430954 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d51234ae-bf99-49bc-a3bc-1b392f993726" path="/var/lib/kubelet/pods/d51234ae-bf99-49bc-a3bc-1b392f993726/volumes" Feb 02 13:32:36 crc kubenswrapper[4721]: I0202 13:32:36.433213 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e071a9e9-d1fa-41c2-a0b4-3ddc2470055b" path="/var/lib/kubelet/pods/e071a9e9-d1fa-41c2-a0b4-3ddc2470055b/volumes" Feb 02 13:32:41 crc kubenswrapper[4721]: I0202 13:32:41.078672 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-4hfg5"] Feb 02 13:32:41 crc kubenswrapper[4721]: I0202 13:32:41.092403 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-ed80-account-create-update-w8c4k"] Feb 02 13:32:41 crc kubenswrapper[4721]: I0202 13:32:41.101965 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-4hfg5"] Feb 02 13:32:41 crc kubenswrapper[4721]: I0202 13:32:41.112271 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-ed80-account-create-update-w8c4k"] Feb 02 13:32:42 crc kubenswrapper[4721]: I0202 13:32:42.443658 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0af979c8-207f-455c-b383-fd22b1ec6758" path="/var/lib/kubelet/pods/0af979c8-207f-455c-b383-fd22b1ec6758/volumes" Feb 02 13:32:42 crc kubenswrapper[4721]: I0202 13:32:42.445660 4721 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="5b1f70a8-6b41-4823-991b-934510a608fd" path="/var/lib/kubelet/pods/5b1f70a8-6b41-4823-991b-934510a608fd/volumes" Feb 02 13:32:45 crc kubenswrapper[4721]: I0202 13:32:45.410395 4721 scope.go:117] "RemoveContainer" containerID="28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c" Feb 02 13:32:45 crc kubenswrapper[4721]: E0202 13:32:45.411167 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:32:58 crc kubenswrapper[4721]: I0202 13:32:58.454582 4721 scope.go:117] "RemoveContainer" containerID="28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c" Feb 02 13:32:58 crc kubenswrapper[4721]: E0202 13:32:58.455605 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:33:04 crc kubenswrapper[4721]: I0202 13:33:04.048591 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-wffvl"] Feb 02 13:33:04 crc kubenswrapper[4721]: I0202 13:33:04.064188 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-9865-account-create-update-5xd7v"] Feb 02 13:33:04 crc kubenswrapper[4721]: I0202 13:33:04.079830 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-wffvl"] Feb 02 13:33:04 crc kubenswrapper[4721]: I0202 13:33:04.091386 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-9865-account-create-update-5xd7v"] Feb 02 13:33:04 crc kubenswrapper[4721]: I0202 13:33:04.100996 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-xmp7t"] Feb 02 13:33:04 crc kubenswrapper[4721]: I0202 13:33:04.112311 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-xmp7t"] Feb 02 13:33:04 crc kubenswrapper[4721]: I0202 13:33:04.123215 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-2whnq"] Feb 02 13:33:04 crc kubenswrapper[4721]: I0202 13:33:04.137307 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-2whnq"] Feb 02 13:33:04 crc kubenswrapper[4721]: I0202 13:33:04.155136 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-219b-account-create-update-c48ml"] Feb 02 13:33:04 crc kubenswrapper[4721]: I0202 13:33:04.165531 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-15d5-account-create-update-5kl6r"] Feb 02 13:33:04 crc kubenswrapper[4721]: I0202 13:33:04.176640 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-219b-account-create-update-c48ml"] Feb 02 13:33:04 crc kubenswrapper[4721]: I0202 13:33:04.190231 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-15d5-account-create-update-5kl6r"] Feb 02 13:33:04 crc kubenswrapper[4721]: I0202 
13:33:04.435300 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13666544-a226-43ee-84c9-3232e9fff8d4" path="/var/lib/kubelet/pods/13666544-a226-43ee-84c9-3232e9fff8d4/volumes" Feb 02 13:33:04 crc kubenswrapper[4721]: I0202 13:33:04.436564 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="375b0aad-b921-41d8-af30-181ac4a73c0b" path="/var/lib/kubelet/pods/375b0aad-b921-41d8-af30-181ac4a73c0b/volumes" Feb 02 13:33:04 crc kubenswrapper[4721]: I0202 13:33:04.437728 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46b707f0-c9cf-46b5-b615-4c0ab1da0391" path="/var/lib/kubelet/pods/46b707f0-c9cf-46b5-b615-4c0ab1da0391/volumes" Feb 02 13:33:04 crc kubenswrapper[4721]: I0202 13:33:04.438614 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51bc4821-8b8e-4972-a90e-67a7a7b1fee5" path="/var/lib/kubelet/pods/51bc4821-8b8e-4972-a90e-67a7a7b1fee5/volumes" Feb 02 13:33:04 crc kubenswrapper[4721]: I0202 13:33:04.440028 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67f56b66-72ae-4c95-8051-dc5f7a0faec4" path="/var/lib/kubelet/pods/67f56b66-72ae-4c95-8051-dc5f7a0faec4/volumes" Feb 02 13:33:04 crc kubenswrapper[4721]: I0202 13:33:04.441243 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="937d142a-7868-4de2-85f3-90dcc5a74019" path="/var/lib/kubelet/pods/937d142a-7868-4de2-85f3-90dcc5a74019/volumes" Feb 02 13:33:05 crc kubenswrapper[4721]: I0202 13:33:05.052930 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-6f5e-account-create-update-nk8v6"] Feb 02 13:33:05 crc kubenswrapper[4721]: I0202 13:33:05.067457 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-xs4g5"] Feb 02 13:33:05 crc kubenswrapper[4721]: I0202 13:33:05.078608 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-xs4g5"] Feb 02 13:33:05 crc kubenswrapper[4721]: I0202 13:33:05.090508 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-6f5e-account-create-update-nk8v6"] Feb 02 13:33:06 crc kubenswrapper[4721]: I0202 13:33:06.425869 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d74a2f0-9f60-4f59-92e4-11b9136f1db5" path="/var/lib/kubelet/pods/0d74a2f0-9f60-4f59-92e4-11b9136f1db5/volumes" Feb 02 13:33:06 crc kubenswrapper[4721]: I0202 13:33:06.427568 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45ad4533-c6a5-49da-8f33-23113f8b7fea" path="/var/lib/kubelet/pods/45ad4533-c6a5-49da-8f33-23113f8b7fea/volumes" Feb 02 13:33:09 crc kubenswrapper[4721]: I0202 13:33:09.409508 4721 scope.go:117] "RemoveContainer" containerID="28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c" Feb 02 13:33:09 crc kubenswrapper[4721]: E0202 13:33:09.410159 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:33:10 crc kubenswrapper[4721]: I0202 13:33:10.721314 4721 scope.go:117] "RemoveContainer" containerID="c60a5b8d8e8a5c3310b5b5e67ee44c42a8f4c0e6c7827776fd046304bab7b307" Feb 02 13:33:10 crc kubenswrapper[4721]: I0202 13:33:10.770082 4721 scope.go:117] 
"RemoveContainer" containerID="c036ed84a2cba404110a4db04b8c7d0f021199196a70d367772128ca1a327056" Feb 02 13:33:10 crc kubenswrapper[4721]: I0202 13:33:10.828178 4721 scope.go:117] "RemoveContainer" containerID="3fc0efffdb16822dab9442b0b34ece997fbe26ebe5808aadd9adb66a693a7dd5" Feb 02 13:33:10 crc kubenswrapper[4721]: I0202 13:33:10.881478 4721 scope.go:117] "RemoveContainer" containerID="ac2082a5d3a7b825797912c8d9660423dc0e0e1d5b6ff60e8c46690201c145fc" Feb 02 13:33:10 crc kubenswrapper[4721]: I0202 13:33:10.911591 4721 scope.go:117] "RemoveContainer" containerID="075500f50301a059fcb64d3fd73cd025d86472e12f54ce995ede7cc3876a10cc" Feb 02 13:33:10 crc kubenswrapper[4721]: I0202 13:33:10.974988 4721 scope.go:117] "RemoveContainer" containerID="4b856042309c84ea0a6e8c400ccf627a4542f6766de95577e1154ef8996d41d0" Feb 02 13:33:11 crc kubenswrapper[4721]: I0202 13:33:11.032435 4721 scope.go:117] "RemoveContainer" containerID="fdd34df09b38c8d4122669f8f4cba7de1072b0822ab1da7e4f47ca0b7bdd4576" Feb 02 13:33:11 crc kubenswrapper[4721]: I0202 13:33:11.053883 4721 scope.go:117] "RemoveContainer" containerID="c6d0cc979d5c7bfcd7c17e38f66db5aa66eb2098b9d5dff2ff5da7fb49088c43" Feb 02 13:33:11 crc kubenswrapper[4721]: I0202 13:33:11.122061 4721 scope.go:117] "RemoveContainer" containerID="4fba6951bb982a13a5360303ec96e98896da4f493023d6f3bda466f64f4a3da5" Feb 02 13:33:11 crc kubenswrapper[4721]: I0202 13:33:11.148561 4721 scope.go:117] "RemoveContainer" containerID="941cea547a9c6139f4cd34b3cba1a9232469327b41a70627da4554d99e83c28b" Feb 02 13:33:11 crc kubenswrapper[4721]: I0202 13:33:11.170221 4721 scope.go:117] "RemoveContainer" containerID="17ddfbe07f4d8bd38ac75c2dd4cd30a97224663bc355d41b2756657171654039" Feb 02 13:33:11 crc kubenswrapper[4721]: I0202 13:33:11.221260 4721 scope.go:117] "RemoveContainer" containerID="49e2f9d6a9f5b04c1ec533b19afe36a66018a912e5ef184f9b92ab178816de33" Feb 02 13:33:11 crc kubenswrapper[4721]: I0202 13:33:11.249511 4721 scope.go:117] "RemoveContainer" containerID="ce2892d88a036a25fa1c546c389b3e2c80e44cee3ffdb13ac0e6fe0ce93c414d" Feb 02 13:33:11 crc kubenswrapper[4721]: I0202 13:33:11.271599 4721 scope.go:117] "RemoveContainer" containerID="fa62cc31d8fc9109f8c7236f7067b1ae22093077c72a6872a6dc77d5cf6674c5" Feb 02 13:33:11 crc kubenswrapper[4721]: I0202 13:33:11.293389 4721 scope.go:117] "RemoveContainer" containerID="2e794fb911c91ba362224447e9ecd49d098fbbb2f1fb15a45aedc118537561bc" Feb 02 13:33:11 crc kubenswrapper[4721]: I0202 13:33:11.315516 4721 scope.go:117] "RemoveContainer" containerID="2e42426b62e4be73df9f47cc7c9f8475c22ad72a27299b42b9cb2460af93ca8b" Feb 02 13:33:11 crc kubenswrapper[4721]: I0202 13:33:11.344547 4721 scope.go:117] "RemoveContainer" containerID="b4ecf1fb05394c16a116b372fdffdfa0e7375e6cf9e5a5e825266b2a826c68fd" Feb 02 13:33:11 crc kubenswrapper[4721]: I0202 13:33:11.375181 4721 scope.go:117] "RemoveContainer" containerID="11d4406ea2aeef5800b1d48d5c16350e8f64df4bb7e540c2b8bb59f164e7298a" Feb 02 13:33:11 crc kubenswrapper[4721]: I0202 13:33:11.403038 4721 scope.go:117] "RemoveContainer" containerID="27b5beb216b40a6bf47c26d5501508f64c30a9831dd1df313dcf922bbdd6bfbf" Feb 02 13:33:11 crc kubenswrapper[4721]: I0202 13:33:11.430795 4721 scope.go:117] "RemoveContainer" containerID="041fca898cdc3bff357d1c5b88b2d8189fb9511b9b78d7578e5238edecd243a2" Feb 02 13:33:11 crc kubenswrapper[4721]: I0202 13:33:11.460564 4721 scope.go:117] "RemoveContainer" containerID="ebaaea23221e663dcf2fea49186c159f19607ea1b1253a2c8f769a54c895b470" Feb 02 13:33:11 crc kubenswrapper[4721]: I0202 
Feb 02 13:33:18 crc kubenswrapper[4721]: I0202 13:33:18.066178 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-wvp2n"]
Feb 02 13:33:18 crc kubenswrapper[4721]: I0202 13:33:18.078837 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-wvp2n"]
Feb 02 13:33:18 crc kubenswrapper[4721]: I0202 13:33:18.425448 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3395efa1-7b43-4b48-9e06-764b9428c5ab" path="/var/lib/kubelet/pods/3395efa1-7b43-4b48-9e06-764b9428c5ab/volumes"
Feb 02 13:33:20 crc kubenswrapper[4721]: I0202 13:33:20.424713 4721 scope.go:117] "RemoveContainer" containerID="28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c"
Feb 02 13:33:20 crc kubenswrapper[4721]: E0202 13:33:20.426322 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:33:24 crc kubenswrapper[4721]: I0202 13:33:24.039377 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-h675n"]
Feb 02 13:33:24 crc kubenswrapper[4721]: I0202 13:33:24.055057 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-h675n"]
Feb 02 13:33:24 crc kubenswrapper[4721]: I0202 13:33:24.429585 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71ef45b1-9ff2-40ca-950a-07746f51eca9" path="/var/lib/kubelet/pods/71ef45b1-9ff2-40ca-950a-07746f51eca9/volumes"
Feb 02 13:33:28 crc kubenswrapper[4721]: I0202 13:33:28.041927 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-hrqtc"]
Feb 02 13:33:28 crc kubenswrapper[4721]: I0202 13:33:28.060312 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-hrqtc"]
Feb 02 13:33:28 crc kubenswrapper[4721]: I0202 13:33:28.424871 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0531b398-2d44-42c2-bd6c-9e9f7ab8c85d" path="/var/lib/kubelet/pods/0531b398-2d44-42c2-bd6c-9e9f7ab8c85d/volumes"
Feb 02 13:33:35 crc kubenswrapper[4721]: I0202 13:33:35.409986 4721 scope.go:117] "RemoveContainer" containerID="28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c"
Feb 02 13:33:35 crc kubenswrapper[4721]: E0202 13:33:35.410662 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:33:49 crc kubenswrapper[4721]: I0202 13:33:49.410351 4721 scope.go:117] "RemoveContainer" containerID="28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c"
Feb 02 13:33:49 crc kubenswrapper[4721]: E0202 13:33:49.411279 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:33:50 crc kubenswrapper[4721]: I0202 13:33:50.066631 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-2tnbk"]
Feb 02 13:33:50 crc kubenswrapper[4721]: I0202 13:33:50.079211 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-2tnbk"]
Feb 02 13:33:50 crc kubenswrapper[4721]: I0202 13:33:50.422858 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="026bbe7a-aec9-40ee-9be3-cdb35054e076" path="/var/lib/kubelet/pods/026bbe7a-aec9-40ee-9be3-cdb35054e076/volumes"
Feb 02 13:33:57 crc kubenswrapper[4721]: I0202 13:33:57.029794 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-86z2v"]
Feb 02 13:33:57 crc kubenswrapper[4721]: I0202 13:33:57.039837 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-86z2v"]
Feb 02 13:33:58 crc kubenswrapper[4721]: I0202 13:33:58.432439 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdd67c16-7130-4095-952f-006aa5bcd5bb" path="/var/lib/kubelet/pods/bdd67c16-7130-4095-952f-006aa5bcd5bb/volumes"
Feb 02 13:34:03 crc kubenswrapper[4721]: I0202 13:34:03.410600 4721 scope.go:117] "RemoveContainer" containerID="28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c"
Feb 02 13:34:03 crc kubenswrapper[4721]: E0202 13:34:03.411377 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:34:04 crc kubenswrapper[4721]: I0202 13:34:04.031994 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-dw7nl"]
Feb 02 13:34:04 crc kubenswrapper[4721]: I0202 13:34:04.043682 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-dw7nl"]
Feb 02 13:34:04 crc kubenswrapper[4721]: I0202 13:34:04.425908 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d168e414-ab7e-45ad-b142-25dcc1c359b0" path="/var/lib/kubelet/pods/d168e414-ab7e-45ad-b142-25dcc1c359b0/volumes"
Feb 02 13:34:12 crc kubenswrapper[4721]: I0202 13:34:12.015355 4721 scope.go:117] "RemoveContainer" containerID="e7d18b6bc119712cab584a04197b6912f210c5505b0b18fd032278cec2f8f3b5"
Feb 02 13:34:12 crc kubenswrapper[4721]: I0202 13:34:12.057574 4721 scope.go:117] "RemoveContainer" containerID="08a73cbae26287f30a607e9d3bb9b367097d0316dac19c52bb303a922febd87c"
Feb 02 13:34:12 crc kubenswrapper[4721]: I0202 13:34:12.112865 4721 scope.go:117] "RemoveContainer" containerID="3705d645077158cd12edf8f0f9b5a39f0ba95d5854f57c056a964be7f2bc24c9"
Feb 02 13:34:12 crc kubenswrapper[4721]: I0202 13:34:12.162949 4721 scope.go:117] "RemoveContainer" containerID="759c834b6a1cc62188124483c6831d2bab037f76c9aac624de4118e2066fe35a"
Feb 02 13:34:12 crc kubenswrapper[4721]: I0202 13:34:12.267532 4721 scope.go:117] "RemoveContainer" containerID="3bcf52e41be39651ef275074658dfd7b224ff525c369aa709ad250ff99c12eb1"
Feb 02 13:34:12 crc kubenswrapper[4721]: I0202 13:34:12.299576 4721 scope.go:117] "RemoveContainer" containerID="4fe33e5a77c87b36e6e67a1b771c77b2a949e44a54e8c2f88295e47c2b68d215"
Feb 02 13:34:16 crc kubenswrapper[4721]: I0202 13:34:16.410435 4721 scope.go:117] "RemoveContainer" containerID="28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c"
Feb 02 13:34:17 crc kubenswrapper[4721]: I0202 13:34:17.119672 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerStarted","Data":"8db65d587513bf606777ac46eb9a8dd677cd18af3524da4a78bb404625c5d58f"}
Feb 02 13:34:18 crc kubenswrapper[4721]: I0202 13:34:18.061197 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-cgqfl"]
Feb 02 13:34:18 crc kubenswrapper[4721]: I0202 13:34:18.082202 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-cgqfl"]
Feb 02 13:34:18 crc kubenswrapper[4721]: I0202 13:34:18.424870 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47a4176b-5f58-47a9-a614-e5d05526da18" path="/var/lib/kubelet/pods/47a4176b-5f58-47a9-a614-e5d05526da18/volumes"
Feb 02 13:34:20 crc kubenswrapper[4721]: I0202 13:34:20.487692 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fmst6"]
Feb 02 13:34:20 crc kubenswrapper[4721]: E0202 13:34:20.489235 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d19b4436-4c9b-4671-acef-1ba5685cb660" containerName="collect-profiles"
Feb 02 13:34:20 crc kubenswrapper[4721]: I0202 13:34:20.489260 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d19b4436-4c9b-4671-acef-1ba5685cb660" containerName="collect-profiles"
Feb 02 13:34:20 crc kubenswrapper[4721]: I0202 13:34:20.489635 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="d19b4436-4c9b-4671-acef-1ba5685cb660" containerName="collect-profiles"
Feb 02 13:34:20 crc kubenswrapper[4721]: I0202 13:34:20.492524 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fmst6"
Need to start a new one" pod="openshift-marketplace/redhat-operators-fmst6" Feb 02 13:34:20 crc kubenswrapper[4721]: I0202 13:34:20.513493 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fmst6"] Feb 02 13:34:20 crc kubenswrapper[4721]: I0202 13:34:20.588742 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8phmt\" (UniqueName: \"kubernetes.io/projected/8972ee68-f222-49f3-8c06-5ba78388a6cd-kube-api-access-8phmt\") pod \"redhat-operators-fmst6\" (UID: \"8972ee68-f222-49f3-8c06-5ba78388a6cd\") " pod="openshift-marketplace/redhat-operators-fmst6" Feb 02 13:34:20 crc kubenswrapper[4721]: I0202 13:34:20.588828 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8972ee68-f222-49f3-8c06-5ba78388a6cd-catalog-content\") pod \"redhat-operators-fmst6\" (UID: \"8972ee68-f222-49f3-8c06-5ba78388a6cd\") " pod="openshift-marketplace/redhat-operators-fmst6" Feb 02 13:34:20 crc kubenswrapper[4721]: I0202 13:34:20.589269 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8972ee68-f222-49f3-8c06-5ba78388a6cd-utilities\") pod \"redhat-operators-fmst6\" (UID: \"8972ee68-f222-49f3-8c06-5ba78388a6cd\") " pod="openshift-marketplace/redhat-operators-fmst6" Feb 02 13:34:20 crc kubenswrapper[4721]: I0202 13:34:20.691946 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8phmt\" (UniqueName: \"kubernetes.io/projected/8972ee68-f222-49f3-8c06-5ba78388a6cd-kube-api-access-8phmt\") pod \"redhat-operators-fmst6\" (UID: \"8972ee68-f222-49f3-8c06-5ba78388a6cd\") " pod="openshift-marketplace/redhat-operators-fmst6" Feb 02 13:34:20 crc kubenswrapper[4721]: I0202 13:34:20.692025 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8972ee68-f222-49f3-8c06-5ba78388a6cd-catalog-content\") pod \"redhat-operators-fmst6\" (UID: \"8972ee68-f222-49f3-8c06-5ba78388a6cd\") " pod="openshift-marketplace/redhat-operators-fmst6" Feb 02 13:34:20 crc kubenswrapper[4721]: I0202 13:34:20.692201 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8972ee68-f222-49f3-8c06-5ba78388a6cd-utilities\") pod \"redhat-operators-fmst6\" (UID: \"8972ee68-f222-49f3-8c06-5ba78388a6cd\") " pod="openshift-marketplace/redhat-operators-fmst6" Feb 02 13:34:20 crc kubenswrapper[4721]: I0202 13:34:20.692669 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8972ee68-f222-49f3-8c06-5ba78388a6cd-catalog-content\") pod \"redhat-operators-fmst6\" (UID: \"8972ee68-f222-49f3-8c06-5ba78388a6cd\") " pod="openshift-marketplace/redhat-operators-fmst6" Feb 02 13:34:20 crc kubenswrapper[4721]: I0202 13:34:20.692683 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8972ee68-f222-49f3-8c06-5ba78388a6cd-utilities\") pod \"redhat-operators-fmst6\" (UID: \"8972ee68-f222-49f3-8c06-5ba78388a6cd\") " pod="openshift-marketplace/redhat-operators-fmst6" Feb 02 13:34:20 crc kubenswrapper[4721]: I0202 13:34:20.713136 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8phmt\" (UniqueName: \"kubernetes.io/projected/8972ee68-f222-49f3-8c06-5ba78388a6cd-kube-api-access-8phmt\") pod \"redhat-operators-fmst6\" (UID: \"8972ee68-f222-49f3-8c06-5ba78388a6cd\") " pod="openshift-marketplace/redhat-operators-fmst6" Feb 02 13:34:20 crc kubenswrapper[4721]: I0202 13:34:20.816132 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fmst6" Feb 02 13:34:21 crc kubenswrapper[4721]: I0202 13:34:21.098157 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-84s2n"] Feb 02 13:34:21 crc kubenswrapper[4721]: I0202 13:34:21.100895 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-84s2n" Feb 02 13:34:21 crc kubenswrapper[4721]: I0202 13:34:21.111989 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-84s2n"] Feb 02 13:34:21 crc kubenswrapper[4721]: I0202 13:34:21.220505 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbp4d\" (UniqueName: \"kubernetes.io/projected/0d44a404-e01d-4a19-a103-58cbbafbdc7b-kube-api-access-zbp4d\") pod \"community-operators-84s2n\" (UID: \"0d44a404-e01d-4a19-a103-58cbbafbdc7b\") " pod="openshift-marketplace/community-operators-84s2n" Feb 02 13:34:21 crc kubenswrapper[4721]: I0202 13:34:21.220672 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d44a404-e01d-4a19-a103-58cbbafbdc7b-utilities\") pod \"community-operators-84s2n\" (UID: \"0d44a404-e01d-4a19-a103-58cbbafbdc7b\") " pod="openshift-marketplace/community-operators-84s2n" Feb 02 13:34:21 crc kubenswrapper[4721]: I0202 13:34:21.220804 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d44a404-e01d-4a19-a103-58cbbafbdc7b-catalog-content\") pod \"community-operators-84s2n\" (UID: \"0d44a404-e01d-4a19-a103-58cbbafbdc7b\") " pod="openshift-marketplace/community-operators-84s2n" Feb 02 13:34:21 crc kubenswrapper[4721]: I0202 13:34:21.322730 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d44a404-e01d-4a19-a103-58cbbafbdc7b-catalog-content\") pod \"community-operators-84s2n\" (UID: \"0d44a404-e01d-4a19-a103-58cbbafbdc7b\") " pod="openshift-marketplace/community-operators-84s2n" Feb 02 13:34:21 crc kubenswrapper[4721]: I0202 13:34:21.322855 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbp4d\" (UniqueName: \"kubernetes.io/projected/0d44a404-e01d-4a19-a103-58cbbafbdc7b-kube-api-access-zbp4d\") pod \"community-operators-84s2n\" (UID: \"0d44a404-e01d-4a19-a103-58cbbafbdc7b\") " pod="openshift-marketplace/community-operators-84s2n" Feb 02 13:34:21 crc kubenswrapper[4721]: I0202 13:34:21.322936 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d44a404-e01d-4a19-a103-58cbbafbdc7b-utilities\") pod \"community-operators-84s2n\" (UID: \"0d44a404-e01d-4a19-a103-58cbbafbdc7b\") " pod="openshift-marketplace/community-operators-84s2n" Feb 02 13:34:21 crc kubenswrapper[4721]: I0202 13:34:21.323508 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d44a404-e01d-4a19-a103-58cbbafbdc7b-catalog-content\") pod \"community-operators-84s2n\" (UID: \"0d44a404-e01d-4a19-a103-58cbbafbdc7b\") " pod="openshift-marketplace/community-operators-84s2n" Feb 02 13:34:21 crc kubenswrapper[4721]: I0202 13:34:21.323535 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d44a404-e01d-4a19-a103-58cbbafbdc7b-utilities\") pod \"community-operators-84s2n\" (UID: \"0d44a404-e01d-4a19-a103-58cbbafbdc7b\") " pod="openshift-marketplace/community-operators-84s2n" Feb 02 13:34:21 crc kubenswrapper[4721]: I0202 13:34:21.357056 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbp4d\" (UniqueName: \"kubernetes.io/projected/0d44a404-e01d-4a19-a103-58cbbafbdc7b-kube-api-access-zbp4d\") pod \"community-operators-84s2n\" (UID: \"0d44a404-e01d-4a19-a103-58cbbafbdc7b\") " pod="openshift-marketplace/community-operators-84s2n" Feb 02 13:34:21 crc kubenswrapper[4721]: I0202 13:34:21.431802 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-84s2n" Feb 02 13:34:21 crc kubenswrapper[4721]: W0202 13:34:21.434997 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8972ee68_f222_49f3_8c06_5ba78388a6cd.slice/crio-7d900a7ae12137341c4c93ae85dcbc015a3fb6c748d34ccdb600fe258145bb48 WatchSource:0}: Error finding container 7d900a7ae12137341c4c93ae85dcbc015a3fb6c748d34ccdb600fe258145bb48: Status 404 returned error can't find the container with id 7d900a7ae12137341c4c93ae85dcbc015a3fb6c748d34ccdb600fe258145bb48 Feb 02 13:34:21 crc kubenswrapper[4721]: I0202 13:34:21.439669 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fmst6"] Feb 02 13:34:21 crc kubenswrapper[4721]: I0202 13:34:21.973005 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-84s2n"] Feb 02 13:34:22 crc kubenswrapper[4721]: I0202 13:34:22.211869 4721 generic.go:334] "Generic (PLEG): container finished" podID="8972ee68-f222-49f3-8c06-5ba78388a6cd" containerID="f4f5a2a5cea46a62873fff8e7f1234ae4c404df64b0975cf55881d1c4fbdadb9" exitCode=0 Feb 02 13:34:22 crc kubenswrapper[4721]: I0202 13:34:22.211953 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmst6" event={"ID":"8972ee68-f222-49f3-8c06-5ba78388a6cd","Type":"ContainerDied","Data":"f4f5a2a5cea46a62873fff8e7f1234ae4c404df64b0975cf55881d1c4fbdadb9"} Feb 02 13:34:22 crc kubenswrapper[4721]: I0202 13:34:22.211984 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmst6" event={"ID":"8972ee68-f222-49f3-8c06-5ba78388a6cd","Type":"ContainerStarted","Data":"7d900a7ae12137341c4c93ae85dcbc015a3fb6c748d34ccdb600fe258145bb48"} Feb 02 13:34:22 crc kubenswrapper[4721]: I0202 13:34:22.217526 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84s2n" event={"ID":"0d44a404-e01d-4a19-a103-58cbbafbdc7b","Type":"ContainerStarted","Data":"1a886770e742da309dc26755bb066650eb49c943ca412e90da6eb83afff91090"} Feb 02 13:34:23 crc kubenswrapper[4721]: I0202 13:34:23.234970 4721 generic.go:334] "Generic (PLEG): container finished" podID="0d44a404-e01d-4a19-a103-58cbbafbdc7b" 
containerID="7c97f99bfe6221d8ea66bdfad4046f68469ef4ad2afed261914d06d6d1c3a74b" exitCode=0 Feb 02 13:34:23 crc kubenswrapper[4721]: I0202 13:34:23.235037 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84s2n" event={"ID":"0d44a404-e01d-4a19-a103-58cbbafbdc7b","Type":"ContainerDied","Data":"7c97f99bfe6221d8ea66bdfad4046f68469ef4ad2afed261914d06d6d1c3a74b"} Feb 02 13:34:23 crc kubenswrapper[4721]: I0202 13:34:23.238924 4721 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 13:34:24 crc kubenswrapper[4721]: I0202 13:34:24.249108 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84s2n" event={"ID":"0d44a404-e01d-4a19-a103-58cbbafbdc7b","Type":"ContainerStarted","Data":"3b43df588e2c12e4ac5c0b8c1354ac40786a7261c60c361f52e9f9fe304fcf3c"} Feb 02 13:34:25 crc kubenswrapper[4721]: I0202 13:34:25.270368 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmst6" event={"ID":"8972ee68-f222-49f3-8c06-5ba78388a6cd","Type":"ContainerStarted","Data":"d4ec6deb0c199fe2932aa5da9887fd2837b3431d82684c3e3ecea9b39b062f27"} Feb 02 13:34:27 crc kubenswrapper[4721]: I0202 13:34:27.029350 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-n52pp"] Feb 02 13:34:27 crc kubenswrapper[4721]: I0202 13:34:27.042457 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-n52pp"] Feb 02 13:34:27 crc kubenswrapper[4721]: I0202 13:34:27.305031 4721 generic.go:334] "Generic (PLEG): container finished" podID="0d44a404-e01d-4a19-a103-58cbbafbdc7b" containerID="3b43df588e2c12e4ac5c0b8c1354ac40786a7261c60c361f52e9f9fe304fcf3c" exitCode=0 Feb 02 13:34:27 crc kubenswrapper[4721]: I0202 13:34:27.305105 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84s2n" event={"ID":"0d44a404-e01d-4a19-a103-58cbbafbdc7b","Type":"ContainerDied","Data":"3b43df588e2c12e4ac5c0b8c1354ac40786a7261c60c361f52e9f9fe304fcf3c"} Feb 02 13:34:28 crc kubenswrapper[4721]: I0202 13:34:28.044651 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-7wjxh"] Feb 02 13:34:28 crc kubenswrapper[4721]: I0202 13:34:28.065756 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-7wjxh"] Feb 02 13:34:28 crc kubenswrapper[4721]: I0202 13:34:28.316015 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84s2n" event={"ID":"0d44a404-e01d-4a19-a103-58cbbafbdc7b","Type":"ContainerStarted","Data":"cf31c21a88a57036453ab7c75fbf9e8abaeb723450dd869a3f193c5095004d33"} Feb 02 13:34:28 crc kubenswrapper[4721]: I0202 13:34:28.370430 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-84s2n" podStartSLOduration=2.873681278 podStartE2EDuration="7.3704104s" podCreationTimestamp="2026-02-02 13:34:21 +0000 UTC" firstStartedPulling="2026-02-02 13:34:23.238563128 +0000 UTC m=+2003.541077517" lastFinishedPulling="2026-02-02 13:34:27.73529225 +0000 UTC m=+2008.037806639" observedRunningTime="2026-02-02 13:34:28.364775587 +0000 UTC m=+2008.667289986" watchObservedRunningTime="2026-02-02 13:34:28.3704104 +0000 UTC m=+2008.672924789" Feb 02 13:34:28 crc kubenswrapper[4721]: I0202 13:34:28.424506 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fa244a8-7588-4d87-bd5b-cbcd10780c83" 
path="/var/lib/kubelet/pods/9fa244a8-7588-4d87-bd5b-cbcd10780c83/volumes" Feb 02 13:34:28 crc kubenswrapper[4721]: I0202 13:34:28.425365 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad3578ef-5d1b-4c52-939c-237feadc1c5c" path="/var/lib/kubelet/pods/ad3578ef-5d1b-4c52-939c-237feadc1c5c/volumes" Feb 02 13:34:31 crc kubenswrapper[4721]: I0202 13:34:31.369773 4721 generic.go:334] "Generic (PLEG): container finished" podID="8972ee68-f222-49f3-8c06-5ba78388a6cd" containerID="d4ec6deb0c199fe2932aa5da9887fd2837b3431d82684c3e3ecea9b39b062f27" exitCode=0 Feb 02 13:34:31 crc kubenswrapper[4721]: I0202 13:34:31.369977 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmst6" event={"ID":"8972ee68-f222-49f3-8c06-5ba78388a6cd","Type":"ContainerDied","Data":"d4ec6deb0c199fe2932aa5da9887fd2837b3431d82684c3e3ecea9b39b062f27"} Feb 02 13:34:31 crc kubenswrapper[4721]: I0202 13:34:31.433422 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-84s2n" Feb 02 13:34:31 crc kubenswrapper[4721]: I0202 13:34:31.433462 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-84s2n" Feb 02 13:34:32 crc kubenswrapper[4721]: I0202 13:34:32.383221 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmst6" event={"ID":"8972ee68-f222-49f3-8c06-5ba78388a6cd","Type":"ContainerStarted","Data":"74a4c11b6ce3b5a6ca6b3a1f395910e994f318787b772befd6fcc6ba4a326763"} Feb 02 13:34:32 crc kubenswrapper[4721]: I0202 13:34:32.415670 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fmst6" podStartSLOduration=3.748827215 podStartE2EDuration="12.415650316s" podCreationTimestamp="2026-02-02 13:34:20 +0000 UTC" firstStartedPulling="2026-02-02 13:34:23.23864753 +0000 UTC m=+2003.541161959" lastFinishedPulling="2026-02-02 13:34:31.905470631 +0000 UTC m=+2012.207985060" observedRunningTime="2026-02-02 13:34:32.413793195 +0000 UTC m=+2012.716307594" watchObservedRunningTime="2026-02-02 13:34:32.415650316 +0000 UTC m=+2012.718164705" Feb 02 13:34:32 crc kubenswrapper[4721]: I0202 13:34:32.494299 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-84s2n" podUID="0d44a404-e01d-4a19-a103-58cbbafbdc7b" containerName="registry-server" probeResult="failure" output=< Feb 02 13:34:32 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Feb 02 13:34:32 crc kubenswrapper[4721]: > Feb 02 13:34:40 crc kubenswrapper[4721]: I0202 13:34:40.817099 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fmst6" Feb 02 13:34:40 crc kubenswrapper[4721]: I0202 13:34:40.817685 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fmst6" Feb 02 13:34:41 crc kubenswrapper[4721]: I0202 13:34:41.887745 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fmst6" podUID="8972ee68-f222-49f3-8c06-5ba78388a6cd" containerName="registry-server" probeResult="failure" output=< Feb 02 13:34:41 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Feb 02 13:34:41 crc kubenswrapper[4721]: > Feb 02 13:34:42 crc kubenswrapper[4721]: I0202 13:34:42.503661 4721 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/community-operators-84s2n" podUID="0d44a404-e01d-4a19-a103-58cbbafbdc7b" containerName="registry-server" probeResult="failure" output=< Feb 02 13:34:42 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Feb 02 13:34:42 crc kubenswrapper[4721]: > Feb 02 13:34:51 crc kubenswrapper[4721]: I0202 13:34:51.480002 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-84s2n" Feb 02 13:34:51 crc kubenswrapper[4721]: I0202 13:34:51.533989 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-84s2n" Feb 02 13:34:51 crc kubenswrapper[4721]: I0202 13:34:51.740961 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-84s2n"] Feb 02 13:34:51 crc kubenswrapper[4721]: I0202 13:34:51.897812 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fmst6" podUID="8972ee68-f222-49f3-8c06-5ba78388a6cd" containerName="registry-server" probeResult="failure" output=< Feb 02 13:34:51 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Feb 02 13:34:51 crc kubenswrapper[4721]: > Feb 02 13:34:52 crc kubenswrapper[4721]: I0202 13:34:52.613481 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-84s2n" podUID="0d44a404-e01d-4a19-a103-58cbbafbdc7b" containerName="registry-server" containerID="cri-o://cf31c21a88a57036453ab7c75fbf9e8abaeb723450dd869a3f193c5095004d33" gracePeriod=2 Feb 02 13:34:53 crc kubenswrapper[4721]: I0202 13:34:53.438640 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-84s2n" Feb 02 13:34:53 crc kubenswrapper[4721]: I0202 13:34:53.580352 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d44a404-e01d-4a19-a103-58cbbafbdc7b-catalog-content\") pod \"0d44a404-e01d-4a19-a103-58cbbafbdc7b\" (UID: \"0d44a404-e01d-4a19-a103-58cbbafbdc7b\") " Feb 02 13:34:53 crc kubenswrapper[4721]: I0202 13:34:53.580870 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d44a404-e01d-4a19-a103-58cbbafbdc7b-utilities\") pod \"0d44a404-e01d-4a19-a103-58cbbafbdc7b\" (UID: \"0d44a404-e01d-4a19-a103-58cbbafbdc7b\") " Feb 02 13:34:53 crc kubenswrapper[4721]: I0202 13:34:53.581006 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbp4d\" (UniqueName: \"kubernetes.io/projected/0d44a404-e01d-4a19-a103-58cbbafbdc7b-kube-api-access-zbp4d\") pod \"0d44a404-e01d-4a19-a103-58cbbafbdc7b\" (UID: \"0d44a404-e01d-4a19-a103-58cbbafbdc7b\") " Feb 02 13:34:53 crc kubenswrapper[4721]: I0202 13:34:53.582847 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d44a404-e01d-4a19-a103-58cbbafbdc7b-utilities" (OuterVolumeSpecName: "utilities") pod "0d44a404-e01d-4a19-a103-58cbbafbdc7b" (UID: "0d44a404-e01d-4a19-a103-58cbbafbdc7b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:34:53 crc kubenswrapper[4721]: I0202 13:34:53.603321 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d44a404-e01d-4a19-a103-58cbbafbdc7b-kube-api-access-zbp4d" (OuterVolumeSpecName: "kube-api-access-zbp4d") pod "0d44a404-e01d-4a19-a103-58cbbafbdc7b" (UID: "0d44a404-e01d-4a19-a103-58cbbafbdc7b"). InnerVolumeSpecName "kube-api-access-zbp4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:34:53 crc kubenswrapper[4721]: I0202 13:34:53.629314 4721 generic.go:334] "Generic (PLEG): container finished" podID="0d44a404-e01d-4a19-a103-58cbbafbdc7b" containerID="cf31c21a88a57036453ab7c75fbf9e8abaeb723450dd869a3f193c5095004d33" exitCode=0 Feb 02 13:34:53 crc kubenswrapper[4721]: I0202 13:34:53.629363 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84s2n" event={"ID":"0d44a404-e01d-4a19-a103-58cbbafbdc7b","Type":"ContainerDied","Data":"cf31c21a88a57036453ab7c75fbf9e8abaeb723450dd869a3f193c5095004d33"} Feb 02 13:34:53 crc kubenswrapper[4721]: I0202 13:34:53.629387 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-84s2n" Feb 02 13:34:53 crc kubenswrapper[4721]: I0202 13:34:53.629409 4721 scope.go:117] "RemoveContainer" containerID="cf31c21a88a57036453ab7c75fbf9e8abaeb723450dd869a3f193c5095004d33" Feb 02 13:34:53 crc kubenswrapper[4721]: I0202 13:34:53.629394 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84s2n" event={"ID":"0d44a404-e01d-4a19-a103-58cbbafbdc7b","Type":"ContainerDied","Data":"1a886770e742da309dc26755bb066650eb49c943ca412e90da6eb83afff91090"} Feb 02 13:34:53 crc kubenswrapper[4721]: I0202 13:34:53.635090 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d44a404-e01d-4a19-a103-58cbbafbdc7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d44a404-e01d-4a19-a103-58cbbafbdc7b" (UID: "0d44a404-e01d-4a19-a103-58cbbafbdc7b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:34:53 crc kubenswrapper[4721]: I0202 13:34:53.684270 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d44a404-e01d-4a19-a103-58cbbafbdc7b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:34:53 crc kubenswrapper[4721]: I0202 13:34:53.684310 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d44a404-e01d-4a19-a103-58cbbafbdc7b-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:34:53 crc kubenswrapper[4721]: I0202 13:34:53.684325 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbp4d\" (UniqueName: \"kubernetes.io/projected/0d44a404-e01d-4a19-a103-58cbbafbdc7b-kube-api-access-zbp4d\") on node \"crc\" DevicePath \"\"" Feb 02 13:34:53 crc kubenswrapper[4721]: I0202 13:34:53.685934 4721 scope.go:117] "RemoveContainer" containerID="3b43df588e2c12e4ac5c0b8c1354ac40786a7261c60c361f52e9f9fe304fcf3c" Feb 02 13:34:53 crc kubenswrapper[4721]: I0202 13:34:53.721988 4721 scope.go:117] "RemoveContainer" containerID="7c97f99bfe6221d8ea66bdfad4046f68469ef4ad2afed261914d06d6d1c3a74b" Feb 02 13:34:53 crc kubenswrapper[4721]: I0202 13:34:53.777491 4721 scope.go:117] "RemoveContainer" containerID="cf31c21a88a57036453ab7c75fbf9e8abaeb723450dd869a3f193c5095004d33" Feb 02 13:34:53 crc kubenswrapper[4721]: E0202 13:34:53.778062 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf31c21a88a57036453ab7c75fbf9e8abaeb723450dd869a3f193c5095004d33\": container with ID starting with cf31c21a88a57036453ab7c75fbf9e8abaeb723450dd869a3f193c5095004d33 not found: ID does not exist" containerID="cf31c21a88a57036453ab7c75fbf9e8abaeb723450dd869a3f193c5095004d33" Feb 02 13:34:53 crc kubenswrapper[4721]: I0202 13:34:53.778196 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf31c21a88a57036453ab7c75fbf9e8abaeb723450dd869a3f193c5095004d33"} err="failed to get container status \"cf31c21a88a57036453ab7c75fbf9e8abaeb723450dd869a3f193c5095004d33\": rpc error: code = NotFound desc = could not find container \"cf31c21a88a57036453ab7c75fbf9e8abaeb723450dd869a3f193c5095004d33\": container with ID starting with cf31c21a88a57036453ab7c75fbf9e8abaeb723450dd869a3f193c5095004d33 not found: ID does not exist" Feb 02 13:34:53 crc kubenswrapper[4721]: I0202 13:34:53.778226 4721 scope.go:117] "RemoveContainer" containerID="3b43df588e2c12e4ac5c0b8c1354ac40786a7261c60c361f52e9f9fe304fcf3c" Feb 02 13:34:53 crc kubenswrapper[4721]: E0202 13:34:53.778727 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b43df588e2c12e4ac5c0b8c1354ac40786a7261c60c361f52e9f9fe304fcf3c\": container with ID starting with 3b43df588e2c12e4ac5c0b8c1354ac40786a7261c60c361f52e9f9fe304fcf3c not found: ID does not exist" containerID="3b43df588e2c12e4ac5c0b8c1354ac40786a7261c60c361f52e9f9fe304fcf3c" Feb 02 13:34:53 crc kubenswrapper[4721]: I0202 13:34:53.778784 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b43df588e2c12e4ac5c0b8c1354ac40786a7261c60c361f52e9f9fe304fcf3c"} err="failed to get container status \"3b43df588e2c12e4ac5c0b8c1354ac40786a7261c60c361f52e9f9fe304fcf3c\": rpc error: code = NotFound desc = could not find container 
\"3b43df588e2c12e4ac5c0b8c1354ac40786a7261c60c361f52e9f9fe304fcf3c\": container with ID starting with 3b43df588e2c12e4ac5c0b8c1354ac40786a7261c60c361f52e9f9fe304fcf3c not found: ID does not exist" Feb 02 13:34:53 crc kubenswrapper[4721]: I0202 13:34:53.778812 4721 scope.go:117] "RemoveContainer" containerID="7c97f99bfe6221d8ea66bdfad4046f68469ef4ad2afed261914d06d6d1c3a74b" Feb 02 13:34:53 crc kubenswrapper[4721]: E0202 13:34:53.779128 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c97f99bfe6221d8ea66bdfad4046f68469ef4ad2afed261914d06d6d1c3a74b\": container with ID starting with 7c97f99bfe6221d8ea66bdfad4046f68469ef4ad2afed261914d06d6d1c3a74b not found: ID does not exist" containerID="7c97f99bfe6221d8ea66bdfad4046f68469ef4ad2afed261914d06d6d1c3a74b" Feb 02 13:34:53 crc kubenswrapper[4721]: I0202 13:34:53.779231 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c97f99bfe6221d8ea66bdfad4046f68469ef4ad2afed261914d06d6d1c3a74b"} err="failed to get container status \"7c97f99bfe6221d8ea66bdfad4046f68469ef4ad2afed261914d06d6d1c3a74b\": rpc error: code = NotFound desc = could not find container \"7c97f99bfe6221d8ea66bdfad4046f68469ef4ad2afed261914d06d6d1c3a74b\": container with ID starting with 7c97f99bfe6221d8ea66bdfad4046f68469ef4ad2afed261914d06d6d1c3a74b not found: ID does not exist" Feb 02 13:34:53 crc kubenswrapper[4721]: I0202 13:34:53.959493 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-84s2n"] Feb 02 13:34:53 crc kubenswrapper[4721]: I0202 13:34:53.968311 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-84s2n"] Feb 02 13:34:54 crc kubenswrapper[4721]: I0202 13:34:54.433196 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d44a404-e01d-4a19-a103-58cbbafbdc7b" path="/var/lib/kubelet/pods/0d44a404-e01d-4a19-a103-58cbbafbdc7b/volumes" Feb 02 13:35:00 crc kubenswrapper[4721]: I0202 13:35:00.884344 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fmst6" Feb 02 13:35:00 crc kubenswrapper[4721]: I0202 13:35:00.937612 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fmst6" Feb 02 13:35:01 crc kubenswrapper[4721]: I0202 13:35:01.119352 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fmst6"] Feb 02 13:35:02 crc kubenswrapper[4721]: I0202 13:35:02.728983 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fmst6" podUID="8972ee68-f222-49f3-8c06-5ba78388a6cd" containerName="registry-server" containerID="cri-o://74a4c11b6ce3b5a6ca6b3a1f395910e994f318787b772befd6fcc6ba4a326763" gracePeriod=2 Feb 02 13:35:03 crc kubenswrapper[4721]: I0202 13:35:03.275473 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fmst6" Feb 02 13:35:03 crc kubenswrapper[4721]: I0202 13:35:03.394160 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8972ee68-f222-49f3-8c06-5ba78388a6cd-catalog-content\") pod \"8972ee68-f222-49f3-8c06-5ba78388a6cd\" (UID: \"8972ee68-f222-49f3-8c06-5ba78388a6cd\") " Feb 02 13:35:03 crc kubenswrapper[4721]: I0202 13:35:03.394468 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8972ee68-f222-49f3-8c06-5ba78388a6cd-utilities\") pod \"8972ee68-f222-49f3-8c06-5ba78388a6cd\" (UID: \"8972ee68-f222-49f3-8c06-5ba78388a6cd\") " Feb 02 13:35:03 crc kubenswrapper[4721]: I0202 13:35:03.395020 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8phmt\" (UniqueName: \"kubernetes.io/projected/8972ee68-f222-49f3-8c06-5ba78388a6cd-kube-api-access-8phmt\") pod \"8972ee68-f222-49f3-8c06-5ba78388a6cd\" (UID: \"8972ee68-f222-49f3-8c06-5ba78388a6cd\") " Feb 02 13:35:03 crc kubenswrapper[4721]: I0202 13:35:03.396527 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8972ee68-f222-49f3-8c06-5ba78388a6cd-utilities" (OuterVolumeSpecName: "utilities") pod "8972ee68-f222-49f3-8c06-5ba78388a6cd" (UID: "8972ee68-f222-49f3-8c06-5ba78388a6cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:35:03 crc kubenswrapper[4721]: I0202 13:35:03.398265 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8972ee68-f222-49f3-8c06-5ba78388a6cd-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:35:03 crc kubenswrapper[4721]: I0202 13:35:03.411550 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8972ee68-f222-49f3-8c06-5ba78388a6cd-kube-api-access-8phmt" (OuterVolumeSpecName: "kube-api-access-8phmt") pod "8972ee68-f222-49f3-8c06-5ba78388a6cd" (UID: "8972ee68-f222-49f3-8c06-5ba78388a6cd"). InnerVolumeSpecName "kube-api-access-8phmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:35:03 crc kubenswrapper[4721]: I0202 13:35:03.499891 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8phmt\" (UniqueName: \"kubernetes.io/projected/8972ee68-f222-49f3-8c06-5ba78388a6cd-kube-api-access-8phmt\") on node \"crc\" DevicePath \"\"" Feb 02 13:35:03 crc kubenswrapper[4721]: I0202 13:35:03.543571 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8972ee68-f222-49f3-8c06-5ba78388a6cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8972ee68-f222-49f3-8c06-5ba78388a6cd" (UID: "8972ee68-f222-49f3-8c06-5ba78388a6cd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:35:03 crc kubenswrapper[4721]: I0202 13:35:03.603118 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8972ee68-f222-49f3-8c06-5ba78388a6cd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:35:03 crc kubenswrapper[4721]: I0202 13:35:03.752278 4721 generic.go:334] "Generic (PLEG): container finished" podID="8972ee68-f222-49f3-8c06-5ba78388a6cd" containerID="74a4c11b6ce3b5a6ca6b3a1f395910e994f318787b772befd6fcc6ba4a326763" exitCode=0 Feb 02 13:35:03 crc kubenswrapper[4721]: I0202 13:35:03.752325 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmst6" event={"ID":"8972ee68-f222-49f3-8c06-5ba78388a6cd","Type":"ContainerDied","Data":"74a4c11b6ce3b5a6ca6b3a1f395910e994f318787b772befd6fcc6ba4a326763"} Feb 02 13:35:03 crc kubenswrapper[4721]: I0202 13:35:03.752357 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmst6" event={"ID":"8972ee68-f222-49f3-8c06-5ba78388a6cd","Type":"ContainerDied","Data":"7d900a7ae12137341c4c93ae85dcbc015a3fb6c748d34ccdb600fe258145bb48"} Feb 02 13:35:03 crc kubenswrapper[4721]: I0202 13:35:03.752376 4721 scope.go:117] "RemoveContainer" containerID="74a4c11b6ce3b5a6ca6b3a1f395910e994f318787b772befd6fcc6ba4a326763" Feb 02 13:35:03 crc kubenswrapper[4721]: I0202 13:35:03.752394 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fmst6" Feb 02 13:35:03 crc kubenswrapper[4721]: I0202 13:35:03.775978 4721 scope.go:117] "RemoveContainer" containerID="d4ec6deb0c199fe2932aa5da9887fd2837b3431d82684c3e3ecea9b39b062f27" Feb 02 13:35:03 crc kubenswrapper[4721]: I0202 13:35:03.814727 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fmst6"] Feb 02 13:35:03 crc kubenswrapper[4721]: I0202 13:35:03.835006 4721 scope.go:117] "RemoveContainer" containerID="f4f5a2a5cea46a62873fff8e7f1234ae4c404df64b0975cf55881d1c4fbdadb9" Feb 02 13:35:03 crc kubenswrapper[4721]: I0202 13:35:03.837355 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fmst6"] Feb 02 13:35:03 crc kubenswrapper[4721]: I0202 13:35:03.879615 4721 scope.go:117] "RemoveContainer" containerID="74a4c11b6ce3b5a6ca6b3a1f395910e994f318787b772befd6fcc6ba4a326763" Feb 02 13:35:03 crc kubenswrapper[4721]: E0202 13:35:03.880118 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74a4c11b6ce3b5a6ca6b3a1f395910e994f318787b772befd6fcc6ba4a326763\": container with ID starting with 74a4c11b6ce3b5a6ca6b3a1f395910e994f318787b772befd6fcc6ba4a326763 not found: ID does not exist" containerID="74a4c11b6ce3b5a6ca6b3a1f395910e994f318787b772befd6fcc6ba4a326763" Feb 02 13:35:03 crc kubenswrapper[4721]: I0202 13:35:03.880156 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74a4c11b6ce3b5a6ca6b3a1f395910e994f318787b772befd6fcc6ba4a326763"} err="failed to get container status \"74a4c11b6ce3b5a6ca6b3a1f395910e994f318787b772befd6fcc6ba4a326763\": rpc error: code = NotFound desc = could not find container \"74a4c11b6ce3b5a6ca6b3a1f395910e994f318787b772befd6fcc6ba4a326763\": container with ID starting with 74a4c11b6ce3b5a6ca6b3a1f395910e994f318787b772befd6fcc6ba4a326763 not found: ID does not exist" Feb 02 13:35:03 crc 
kubenswrapper[4721]: I0202 13:35:03.880178 4721 scope.go:117] "RemoveContainer" containerID="d4ec6deb0c199fe2932aa5da9887fd2837b3431d82684c3e3ecea9b39b062f27" Feb 02 13:35:03 crc kubenswrapper[4721]: E0202 13:35:03.880666 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4ec6deb0c199fe2932aa5da9887fd2837b3431d82684c3e3ecea9b39b062f27\": container with ID starting with d4ec6deb0c199fe2932aa5da9887fd2837b3431d82684c3e3ecea9b39b062f27 not found: ID does not exist" containerID="d4ec6deb0c199fe2932aa5da9887fd2837b3431d82684c3e3ecea9b39b062f27" Feb 02 13:35:03 crc kubenswrapper[4721]: I0202 13:35:03.880720 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4ec6deb0c199fe2932aa5da9887fd2837b3431d82684c3e3ecea9b39b062f27"} err="failed to get container status \"d4ec6deb0c199fe2932aa5da9887fd2837b3431d82684c3e3ecea9b39b062f27\": rpc error: code = NotFound desc = could not find container \"d4ec6deb0c199fe2932aa5da9887fd2837b3431d82684c3e3ecea9b39b062f27\": container with ID starting with d4ec6deb0c199fe2932aa5da9887fd2837b3431d82684c3e3ecea9b39b062f27 not found: ID does not exist" Feb 02 13:35:03 crc kubenswrapper[4721]: I0202 13:35:03.880755 4721 scope.go:117] "RemoveContainer" containerID="f4f5a2a5cea46a62873fff8e7f1234ae4c404df64b0975cf55881d1c4fbdadb9" Feb 02 13:35:03 crc kubenswrapper[4721]: E0202 13:35:03.881122 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4f5a2a5cea46a62873fff8e7f1234ae4c404df64b0975cf55881d1c4fbdadb9\": container with ID starting with f4f5a2a5cea46a62873fff8e7f1234ae4c404df64b0975cf55881d1c4fbdadb9 not found: ID does not exist" containerID="f4f5a2a5cea46a62873fff8e7f1234ae4c404df64b0975cf55881d1c4fbdadb9" Feb 02 13:35:03 crc kubenswrapper[4721]: I0202 13:35:03.881154 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4f5a2a5cea46a62873fff8e7f1234ae4c404df64b0975cf55881d1c4fbdadb9"} err="failed to get container status \"f4f5a2a5cea46a62873fff8e7f1234ae4c404df64b0975cf55881d1c4fbdadb9\": rpc error: code = NotFound desc = could not find container \"f4f5a2a5cea46a62873fff8e7f1234ae4c404df64b0975cf55881d1c4fbdadb9\": container with ID starting with f4f5a2a5cea46a62873fff8e7f1234ae4c404df64b0975cf55881d1c4fbdadb9 not found: ID does not exist" Feb 02 13:35:04 crc kubenswrapper[4721]: I0202 13:35:04.425555 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8972ee68-f222-49f3-8c06-5ba78388a6cd" path="/var/lib/kubelet/pods/8972ee68-f222-49f3-8c06-5ba78388a6cd/volumes" Feb 02 13:35:12 crc kubenswrapper[4721]: I0202 13:35:12.514527 4721 scope.go:117] "RemoveContainer" containerID="0b784cae8dddc21c2c3af89d409032d5d888340b53c786c1dc27d600d257dd2b" Feb 02 13:35:12 crc kubenswrapper[4721]: I0202 13:35:12.561722 4721 scope.go:117] "RemoveContainer" containerID="2f2b028f4f0c88964c0238ef71f7a14ee0e0d63a6586667e0a8a76c80b585914" Feb 02 13:35:12 crc kubenswrapper[4721]: I0202 13:35:12.625323 4721 scope.go:117] "RemoveContainer" containerID="ce2db44950c758448aaea5320ccdad1fe422fd10d5dc9377dff5887076136a7a" Feb 02 13:35:28 crc kubenswrapper[4721]: I0202 13:35:28.064389 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-22bg7"] Feb 02 13:35:28 crc kubenswrapper[4721]: I0202 13:35:28.078545 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-api-db-create-22bg7"] Feb 02 13:35:28 crc kubenswrapper[4721]: I0202 13:35:28.429504 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d33cb7-8d98-44cc-97ef-229d34805e46" path="/var/lib/kubelet/pods/31d33cb7-8d98-44cc-97ef-229d34805e46/volumes" Feb 02 13:35:29 crc kubenswrapper[4721]: I0202 13:35:29.049610 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-zbntf"] Feb 02 13:35:29 crc kubenswrapper[4721]: I0202 13:35:29.102287 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-7hfxs"] Feb 02 13:35:29 crc kubenswrapper[4721]: I0202 13:35:29.121134 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-zbntf"] Feb 02 13:35:29 crc kubenswrapper[4721]: I0202 13:35:29.134928 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-7hfxs"] Feb 02 13:35:30 crc kubenswrapper[4721]: I0202 13:35:30.031898 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-5c37-account-create-update-h9w2m"] Feb 02 13:35:30 crc kubenswrapper[4721]: I0202 13:35:30.041591 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-6b23-account-create-update-5q82h"] Feb 02 13:35:30 crc kubenswrapper[4721]: I0202 13:35:30.053363 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-ecf5-account-create-update-k6kdv"] Feb 02 13:35:30 crc kubenswrapper[4721]: I0202 13:35:30.062889 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-5c37-account-create-update-h9w2m"] Feb 02 13:35:30 crc kubenswrapper[4721]: I0202 13:35:30.071433 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-ecf5-account-create-update-k6kdv"] Feb 02 13:35:30 crc kubenswrapper[4721]: I0202 13:35:30.081786 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-6b23-account-create-update-5q82h"] Feb 02 13:35:30 crc kubenswrapper[4721]: I0202 13:35:30.428707 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="011e7b6f-64eb-48b5-be89-8304581d4c5f" path="/var/lib/kubelet/pods/011e7b6f-64eb-48b5-be89-8304581d4c5f/volumes" Feb 02 13:35:30 crc kubenswrapper[4721]: I0202 13:35:30.432173 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd874ae-fdb8-4f98-ae51-dac54a44e001" path="/var/lib/kubelet/pods/0dd874ae-fdb8-4f98-ae51-dac54a44e001/volumes" Feb 02 13:35:30 crc kubenswrapper[4721]: I0202 13:35:30.433506 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="147719a3-96ca-4551-a395-648dd45b4ce6" path="/var/lib/kubelet/pods/147719a3-96ca-4551-a395-648dd45b4ce6/volumes" Feb 02 13:35:30 crc kubenswrapper[4721]: I0202 13:35:30.435128 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84e297f9-7808-4195-86b2-2c17f4638bf2" path="/var/lib/kubelet/pods/84e297f9-7808-4195-86b2-2c17f4638bf2/volumes" Feb 02 13:35:30 crc kubenswrapper[4721]: I0202 13:35:30.436513 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1" path="/var/lib/kubelet/pods/a5e633c7-bd54-4bcb-9f5c-73d6b059d0e1/volumes" Feb 02 13:35:35 crc kubenswrapper[4721]: I0202 13:35:35.019240 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gs6c5"] Feb 02 13:35:35 crc kubenswrapper[4721]: E0202 13:35:35.020624 4721 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="8972ee68-f222-49f3-8c06-5ba78388a6cd" containerName="extract-utilities" Feb 02 13:35:35 crc kubenswrapper[4721]: I0202 13:35:35.020658 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="8972ee68-f222-49f3-8c06-5ba78388a6cd" containerName="extract-utilities" Feb 02 13:35:35 crc kubenswrapper[4721]: E0202 13:35:35.020690 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8972ee68-f222-49f3-8c06-5ba78388a6cd" containerName="extract-content" Feb 02 13:35:35 crc kubenswrapper[4721]: I0202 13:35:35.020704 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="8972ee68-f222-49f3-8c06-5ba78388a6cd" containerName="extract-content" Feb 02 13:35:35 crc kubenswrapper[4721]: E0202 13:35:35.020757 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8972ee68-f222-49f3-8c06-5ba78388a6cd" containerName="registry-server" Feb 02 13:35:35 crc kubenswrapper[4721]: I0202 13:35:35.020772 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="8972ee68-f222-49f3-8c06-5ba78388a6cd" containerName="registry-server" Feb 02 13:35:35 crc kubenswrapper[4721]: E0202 13:35:35.020817 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d44a404-e01d-4a19-a103-58cbbafbdc7b" containerName="registry-server" Feb 02 13:35:35 crc kubenswrapper[4721]: I0202 13:35:35.020830 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d44a404-e01d-4a19-a103-58cbbafbdc7b" containerName="registry-server" Feb 02 13:35:35 crc kubenswrapper[4721]: E0202 13:35:35.020851 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d44a404-e01d-4a19-a103-58cbbafbdc7b" containerName="extract-content" Feb 02 13:35:35 crc kubenswrapper[4721]: I0202 13:35:35.020863 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d44a404-e01d-4a19-a103-58cbbafbdc7b" containerName="extract-content" Feb 02 13:35:35 crc kubenswrapper[4721]: E0202 13:35:35.020885 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d44a404-e01d-4a19-a103-58cbbafbdc7b" containerName="extract-utilities" Feb 02 13:35:35 crc kubenswrapper[4721]: I0202 13:35:35.020897 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d44a404-e01d-4a19-a103-58cbbafbdc7b" containerName="extract-utilities" Feb 02 13:35:35 crc kubenswrapper[4721]: I0202 13:35:35.021348 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="8972ee68-f222-49f3-8c06-5ba78388a6cd" containerName="registry-server" Feb 02 13:35:35 crc kubenswrapper[4721]: I0202 13:35:35.021390 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d44a404-e01d-4a19-a103-58cbbafbdc7b" containerName="registry-server" Feb 02 13:35:35 crc kubenswrapper[4721]: I0202 13:35:35.025892 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gs6c5" Feb 02 13:35:35 crc kubenswrapper[4721]: I0202 13:35:35.029181 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gs6c5"] Feb 02 13:35:35 crc kubenswrapper[4721]: I0202 13:35:35.177366 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb6fd6e8-1134-4152-ba9f-c13b2660d022-catalog-content\") pod \"redhat-marketplace-gs6c5\" (UID: \"fb6fd6e8-1134-4152-ba9f-c13b2660d022\") " pod="openshift-marketplace/redhat-marketplace-gs6c5" Feb 02 13:35:35 crc kubenswrapper[4721]: I0202 13:35:35.177791 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb6fd6e8-1134-4152-ba9f-c13b2660d022-utilities\") pod \"redhat-marketplace-gs6c5\" (UID: \"fb6fd6e8-1134-4152-ba9f-c13b2660d022\") " pod="openshift-marketplace/redhat-marketplace-gs6c5" Feb 02 13:35:35 crc kubenswrapper[4721]: I0202 13:35:35.177848 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsd9z\" (UniqueName: \"kubernetes.io/projected/fb6fd6e8-1134-4152-ba9f-c13b2660d022-kube-api-access-hsd9z\") pod \"redhat-marketplace-gs6c5\" (UID: \"fb6fd6e8-1134-4152-ba9f-c13b2660d022\") " pod="openshift-marketplace/redhat-marketplace-gs6c5" Feb 02 13:35:35 crc kubenswrapper[4721]: I0202 13:35:35.280803 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb6fd6e8-1134-4152-ba9f-c13b2660d022-catalog-content\") pod \"redhat-marketplace-gs6c5\" (UID: \"fb6fd6e8-1134-4152-ba9f-c13b2660d022\") " pod="openshift-marketplace/redhat-marketplace-gs6c5" Feb 02 13:35:35 crc kubenswrapper[4721]: I0202 13:35:35.281237 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb6fd6e8-1134-4152-ba9f-c13b2660d022-utilities\") pod \"redhat-marketplace-gs6c5\" (UID: \"fb6fd6e8-1134-4152-ba9f-c13b2660d022\") " pod="openshift-marketplace/redhat-marketplace-gs6c5" Feb 02 13:35:35 crc kubenswrapper[4721]: I0202 13:35:35.281274 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb6fd6e8-1134-4152-ba9f-c13b2660d022-catalog-content\") pod \"redhat-marketplace-gs6c5\" (UID: \"fb6fd6e8-1134-4152-ba9f-c13b2660d022\") " pod="openshift-marketplace/redhat-marketplace-gs6c5" Feb 02 13:35:35 crc kubenswrapper[4721]: I0202 13:35:35.281295 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsd9z\" (UniqueName: \"kubernetes.io/projected/fb6fd6e8-1134-4152-ba9f-c13b2660d022-kube-api-access-hsd9z\") pod \"redhat-marketplace-gs6c5\" (UID: \"fb6fd6e8-1134-4152-ba9f-c13b2660d022\") " pod="openshift-marketplace/redhat-marketplace-gs6c5" Feb 02 13:35:35 crc kubenswrapper[4721]: I0202 13:35:35.281538 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb6fd6e8-1134-4152-ba9f-c13b2660d022-utilities\") pod \"redhat-marketplace-gs6c5\" (UID: \"fb6fd6e8-1134-4152-ba9f-c13b2660d022\") " pod="openshift-marketplace/redhat-marketplace-gs6c5" Feb 02 13:35:35 crc kubenswrapper[4721]: I0202 13:35:35.302803 4721 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hsd9z\" (UniqueName: \"kubernetes.io/projected/fb6fd6e8-1134-4152-ba9f-c13b2660d022-kube-api-access-hsd9z\") pod \"redhat-marketplace-gs6c5\" (UID: \"fb6fd6e8-1134-4152-ba9f-c13b2660d022\") " pod="openshift-marketplace/redhat-marketplace-gs6c5" Feb 02 13:35:35 crc kubenswrapper[4721]: I0202 13:35:35.366781 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gs6c5" Feb 02 13:35:35 crc kubenswrapper[4721]: I0202 13:35:35.871003 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gs6c5"] Feb 02 13:35:35 crc kubenswrapper[4721]: W0202 13:35:35.875132 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb6fd6e8_1134_4152_ba9f_c13b2660d022.slice/crio-792dec176685fa30bf565a9c8042fc5dfa645b2c73f5444ff1b7551c0cbc3ab4 WatchSource:0}: Error finding container 792dec176685fa30bf565a9c8042fc5dfa645b2c73f5444ff1b7551c0cbc3ab4: Status 404 returned error can't find the container with id 792dec176685fa30bf565a9c8042fc5dfa645b2c73f5444ff1b7551c0cbc3ab4 Feb 02 13:35:36 crc kubenswrapper[4721]: I0202 13:35:36.143821 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gs6c5" event={"ID":"fb6fd6e8-1134-4152-ba9f-c13b2660d022","Type":"ContainerStarted","Data":"792dec176685fa30bf565a9c8042fc5dfa645b2c73f5444ff1b7551c0cbc3ab4"} Feb 02 13:35:37 crc kubenswrapper[4721]: I0202 13:35:37.162638 4721 generic.go:334] "Generic (PLEG): container finished" podID="fb6fd6e8-1134-4152-ba9f-c13b2660d022" containerID="7da1838c8886caa5f3234f5e18217a2f0dba246c557d9428c436ad49653d6fce" exitCode=0 Feb 02 13:35:37 crc kubenswrapper[4721]: I0202 13:35:37.162730 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gs6c5" event={"ID":"fb6fd6e8-1134-4152-ba9f-c13b2660d022","Type":"ContainerDied","Data":"7da1838c8886caa5f3234f5e18217a2f0dba246c557d9428c436ad49653d6fce"} Feb 02 13:35:38 crc kubenswrapper[4721]: I0202 13:35:38.176525 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gs6c5" event={"ID":"fb6fd6e8-1134-4152-ba9f-c13b2660d022","Type":"ContainerStarted","Data":"7445ca06a745cdd7aa336fe3800ae4c4859529f0dbaf6a0d2eebc5dacedf8885"} Feb 02 13:35:39 crc kubenswrapper[4721]: I0202 13:35:39.189156 4721 generic.go:334] "Generic (PLEG): container finished" podID="fb6fd6e8-1134-4152-ba9f-c13b2660d022" containerID="7445ca06a745cdd7aa336fe3800ae4c4859529f0dbaf6a0d2eebc5dacedf8885" exitCode=0 Feb 02 13:35:39 crc kubenswrapper[4721]: I0202 13:35:39.189230 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gs6c5" event={"ID":"fb6fd6e8-1134-4152-ba9f-c13b2660d022","Type":"ContainerDied","Data":"7445ca06a745cdd7aa336fe3800ae4c4859529f0dbaf6a0d2eebc5dacedf8885"} Feb 02 13:35:40 crc kubenswrapper[4721]: I0202 13:35:40.204716 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gs6c5" event={"ID":"fb6fd6e8-1134-4152-ba9f-c13b2660d022","Type":"ContainerStarted","Data":"26276be656c22094ec0120095d3587af0bacd94837cef6adde92a89050e06bdf"} Feb 02 13:35:40 crc kubenswrapper[4721]: I0202 13:35:40.240832 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gs6c5" podStartSLOduration=3.7900480500000002 
podStartE2EDuration="6.240810497s" podCreationTimestamp="2026-02-02 13:35:34 +0000 UTC" firstStartedPulling="2026-02-02 13:35:37.165900028 +0000 UTC m=+2077.468414427" lastFinishedPulling="2026-02-02 13:35:39.616662445 +0000 UTC m=+2079.919176874" observedRunningTime="2026-02-02 13:35:40.226883357 +0000 UTC m=+2080.529397776" watchObservedRunningTime="2026-02-02 13:35:40.240810497 +0000 UTC m=+2080.543324886" Feb 02 13:35:45 crc kubenswrapper[4721]: I0202 13:35:45.367828 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gs6c5" Feb 02 13:35:45 crc kubenswrapper[4721]: I0202 13:35:45.368399 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gs6c5" Feb 02 13:35:45 crc kubenswrapper[4721]: I0202 13:35:45.428387 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gs6c5" Feb 02 13:35:46 crc kubenswrapper[4721]: I0202 13:35:46.349932 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gs6c5" Feb 02 13:35:46 crc kubenswrapper[4721]: I0202 13:35:46.426428 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gs6c5"] Feb 02 13:35:48 crc kubenswrapper[4721]: I0202 13:35:48.316783 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gs6c5" podUID="fb6fd6e8-1134-4152-ba9f-c13b2660d022" containerName="registry-server" containerID="cri-o://26276be656c22094ec0120095d3587af0bacd94837cef6adde92a89050e06bdf" gracePeriod=2 Feb 02 13:35:48 crc kubenswrapper[4721]: I0202 13:35:48.881550 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gs6c5" Feb 02 13:35:48 crc kubenswrapper[4721]: I0202 13:35:48.973141 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsd9z\" (UniqueName: \"kubernetes.io/projected/fb6fd6e8-1134-4152-ba9f-c13b2660d022-kube-api-access-hsd9z\") pod \"fb6fd6e8-1134-4152-ba9f-c13b2660d022\" (UID: \"fb6fd6e8-1134-4152-ba9f-c13b2660d022\") " Feb 02 13:35:48 crc kubenswrapper[4721]: I0202 13:35:48.973234 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb6fd6e8-1134-4152-ba9f-c13b2660d022-catalog-content\") pod \"fb6fd6e8-1134-4152-ba9f-c13b2660d022\" (UID: \"fb6fd6e8-1134-4152-ba9f-c13b2660d022\") " Feb 02 13:35:48 crc kubenswrapper[4721]: I0202 13:35:48.973389 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb6fd6e8-1134-4152-ba9f-c13b2660d022-utilities\") pod \"fb6fd6e8-1134-4152-ba9f-c13b2660d022\" (UID: \"fb6fd6e8-1134-4152-ba9f-c13b2660d022\") " Feb 02 13:35:48 crc kubenswrapper[4721]: I0202 13:35:48.974952 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb6fd6e8-1134-4152-ba9f-c13b2660d022-utilities" (OuterVolumeSpecName: "utilities") pod "fb6fd6e8-1134-4152-ba9f-c13b2660d022" (UID: "fb6fd6e8-1134-4152-ba9f-c13b2660d022"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:35:48 crc kubenswrapper[4721]: I0202 13:35:48.984061 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb6fd6e8-1134-4152-ba9f-c13b2660d022-kube-api-access-hsd9z" (OuterVolumeSpecName: "kube-api-access-hsd9z") pod "fb6fd6e8-1134-4152-ba9f-c13b2660d022" (UID: "fb6fd6e8-1134-4152-ba9f-c13b2660d022"). InnerVolumeSpecName "kube-api-access-hsd9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:35:48 crc kubenswrapper[4721]: I0202 13:35:48.997115 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb6fd6e8-1134-4152-ba9f-c13b2660d022-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb6fd6e8-1134-4152-ba9f-c13b2660d022" (UID: "fb6fd6e8-1134-4152-ba9f-c13b2660d022"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:35:49 crc kubenswrapper[4721]: I0202 13:35:49.076733 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsd9z\" (UniqueName: \"kubernetes.io/projected/fb6fd6e8-1134-4152-ba9f-c13b2660d022-kube-api-access-hsd9z\") on node \"crc\" DevicePath \"\"" Feb 02 13:35:49 crc kubenswrapper[4721]: I0202 13:35:49.076773 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb6fd6e8-1134-4152-ba9f-c13b2660d022-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:35:49 crc kubenswrapper[4721]: I0202 13:35:49.076787 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb6fd6e8-1134-4152-ba9f-c13b2660d022-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:35:49 crc kubenswrapper[4721]: I0202 13:35:49.322059 4721 generic.go:334] "Generic (PLEG): container finished" podID="fb6fd6e8-1134-4152-ba9f-c13b2660d022" containerID="26276be656c22094ec0120095d3587af0bacd94837cef6adde92a89050e06bdf" exitCode=0 Feb 02 13:35:49 crc kubenswrapper[4721]: I0202 13:35:49.322114 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gs6c5" event={"ID":"fb6fd6e8-1134-4152-ba9f-c13b2660d022","Type":"ContainerDied","Data":"26276be656c22094ec0120095d3587af0bacd94837cef6adde92a89050e06bdf"} Feb 02 13:35:49 crc kubenswrapper[4721]: I0202 13:35:49.322159 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gs6c5" Feb 02 13:35:49 crc kubenswrapper[4721]: I0202 13:35:49.322413 4721 scope.go:117] "RemoveContainer" containerID="26276be656c22094ec0120095d3587af0bacd94837cef6adde92a89050e06bdf" Feb 02 13:35:49 crc kubenswrapper[4721]: I0202 13:35:49.322397 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gs6c5" event={"ID":"fb6fd6e8-1134-4152-ba9f-c13b2660d022","Type":"ContainerDied","Data":"792dec176685fa30bf565a9c8042fc5dfa645b2c73f5444ff1b7551c0cbc3ab4"} Feb 02 13:35:49 crc kubenswrapper[4721]: I0202 13:35:49.374686 4721 scope.go:117] "RemoveContainer" containerID="7445ca06a745cdd7aa336fe3800ae4c4859529f0dbaf6a0d2eebc5dacedf8885" Feb 02 13:35:49 crc kubenswrapper[4721]: I0202 13:35:49.381539 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gs6c5"] Feb 02 13:35:49 crc kubenswrapper[4721]: I0202 13:35:49.397102 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gs6c5"] Feb 02 13:35:49 crc kubenswrapper[4721]: I0202 13:35:49.398226 4721 scope.go:117] "RemoveContainer" containerID="7da1838c8886caa5f3234f5e18217a2f0dba246c557d9428c436ad49653d6fce" Feb 02 13:35:49 crc kubenswrapper[4721]: I0202 13:35:49.463979 4721 scope.go:117] "RemoveContainer" containerID="26276be656c22094ec0120095d3587af0bacd94837cef6adde92a89050e06bdf" Feb 02 13:35:49 crc kubenswrapper[4721]: E0202 13:35:49.464463 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26276be656c22094ec0120095d3587af0bacd94837cef6adde92a89050e06bdf\": container with ID starting with 26276be656c22094ec0120095d3587af0bacd94837cef6adde92a89050e06bdf not found: ID does not exist" containerID="26276be656c22094ec0120095d3587af0bacd94837cef6adde92a89050e06bdf" Feb 02 13:35:49 crc kubenswrapper[4721]: I0202 13:35:49.464509 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26276be656c22094ec0120095d3587af0bacd94837cef6adde92a89050e06bdf"} err="failed to get container status \"26276be656c22094ec0120095d3587af0bacd94837cef6adde92a89050e06bdf\": rpc error: code = NotFound desc = could not find container \"26276be656c22094ec0120095d3587af0bacd94837cef6adde92a89050e06bdf\": container with ID starting with 26276be656c22094ec0120095d3587af0bacd94837cef6adde92a89050e06bdf not found: ID does not exist" Feb 02 13:35:49 crc kubenswrapper[4721]: I0202 13:35:49.464535 4721 scope.go:117] "RemoveContainer" containerID="7445ca06a745cdd7aa336fe3800ae4c4859529f0dbaf6a0d2eebc5dacedf8885" Feb 02 13:35:49 crc kubenswrapper[4721]: E0202 13:35:49.465100 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7445ca06a745cdd7aa336fe3800ae4c4859529f0dbaf6a0d2eebc5dacedf8885\": container with ID starting with 7445ca06a745cdd7aa336fe3800ae4c4859529f0dbaf6a0d2eebc5dacedf8885 not found: ID does not exist" containerID="7445ca06a745cdd7aa336fe3800ae4c4859529f0dbaf6a0d2eebc5dacedf8885" Feb 02 13:35:49 crc kubenswrapper[4721]: I0202 13:35:49.465139 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7445ca06a745cdd7aa336fe3800ae4c4859529f0dbaf6a0d2eebc5dacedf8885"} err="failed to get container status \"7445ca06a745cdd7aa336fe3800ae4c4859529f0dbaf6a0d2eebc5dacedf8885\": rpc error: code = NotFound desc = could not find 
container \"7445ca06a745cdd7aa336fe3800ae4c4859529f0dbaf6a0d2eebc5dacedf8885\": container with ID starting with 7445ca06a745cdd7aa336fe3800ae4c4859529f0dbaf6a0d2eebc5dacedf8885 not found: ID does not exist" Feb 02 13:35:49 crc kubenswrapper[4721]: I0202 13:35:49.465164 4721 scope.go:117] "RemoveContainer" containerID="7da1838c8886caa5f3234f5e18217a2f0dba246c557d9428c436ad49653d6fce" Feb 02 13:35:49 crc kubenswrapper[4721]: E0202 13:35:49.465618 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7da1838c8886caa5f3234f5e18217a2f0dba246c557d9428c436ad49653d6fce\": container with ID starting with 7da1838c8886caa5f3234f5e18217a2f0dba246c557d9428c436ad49653d6fce not found: ID does not exist" containerID="7da1838c8886caa5f3234f5e18217a2f0dba246c557d9428c436ad49653d6fce" Feb 02 13:35:49 crc kubenswrapper[4721]: I0202 13:35:49.465656 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7da1838c8886caa5f3234f5e18217a2f0dba246c557d9428c436ad49653d6fce"} err="failed to get container status \"7da1838c8886caa5f3234f5e18217a2f0dba246c557d9428c436ad49653d6fce\": rpc error: code = NotFound desc = could not find container \"7da1838c8886caa5f3234f5e18217a2f0dba246c557d9428c436ad49653d6fce\": container with ID starting with 7da1838c8886caa5f3234f5e18217a2f0dba246c557d9428c436ad49653d6fce not found: ID does not exist" Feb 02 13:35:50 crc kubenswrapper[4721]: I0202 13:35:50.428405 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb6fd6e8-1134-4152-ba9f-c13b2660d022" path="/var/lib/kubelet/pods/fb6fd6e8-1134-4152-ba9f-c13b2660d022/volumes" Feb 02 13:36:06 crc kubenswrapper[4721]: I0202 13:36:06.045606 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9h27j"] Feb 02 13:36:06 crc kubenswrapper[4721]: I0202 13:36:06.057032 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9h27j"] Feb 02 13:36:06 crc kubenswrapper[4721]: I0202 13:36:06.423344 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f01b253a-c7c6-4c9e-a800-a1732ba06f37" path="/var/lib/kubelet/pods/f01b253a-c7c6-4c9e-a800-a1732ba06f37/volumes" Feb 02 13:36:12 crc kubenswrapper[4721]: I0202 13:36:12.838527 4721 scope.go:117] "RemoveContainer" containerID="a5de70229867c13b14cef49c003c51ee5606bf312afc995295088db8627fee73" Feb 02 13:36:12 crc kubenswrapper[4721]: I0202 13:36:12.871452 4721 scope.go:117] "RemoveContainer" containerID="d2ceca0ae565540870f02401237596a428eda4dcf72c0158ff8c7116acbfc486" Feb 02 13:36:12 crc kubenswrapper[4721]: I0202 13:36:12.936857 4721 scope.go:117] "RemoveContainer" containerID="663f048cc96da678155c1a95cb9691e5043c9f03a4259611e0be7f358517c2f1" Feb 02 13:36:12 crc kubenswrapper[4721]: I0202 13:36:12.990115 4721 scope.go:117] "RemoveContainer" containerID="8b1af4c2f51b92e7d8e686eda8c46c518ebd4e4b694ff77617f39b7b376a484a" Feb 02 13:36:13 crc kubenswrapper[4721]: I0202 13:36:13.082610 4721 scope.go:117] "RemoveContainer" containerID="a4df6290a6ff822c9798aad4bb78bddad86f6ee3871a5520d115e6e491f3950e" Feb 02 13:36:13 crc kubenswrapper[4721]: I0202 13:36:13.136266 4721 scope.go:117] "RemoveContainer" containerID="9f52f888eba9daf5e6e283524ff7481c4a05eeb6d1ae52e82e3fc8b08b7473c5" Feb 02 13:36:13 crc kubenswrapper[4721]: I0202 13:36:13.197882 4721 scope.go:117] "RemoveContainer" containerID="4fc7b5047fba1d65d14acd76d647401f0b37510534baf6ea3bf2254dfd744004" Feb 02 13:36:15 
crc kubenswrapper[4721]: I0202 13:36:15.075398 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-gq5tv"]
Feb 02 13:36:15 crc kubenswrapper[4721]: I0202 13:36:15.090601 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-gq5tv"]
Feb 02 13:36:16 crc kubenswrapper[4721]: I0202 13:36:16.047272 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-2c26-account-create-update-2r4tb"]
Feb 02 13:36:16 crc kubenswrapper[4721]: I0202 13:36:16.057481 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-2c26-account-create-update-2r4tb"]
Feb 02 13:36:16 crc kubenswrapper[4721]: I0202 13:36:16.424355 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="997707ef-4296-4151-9385-0fbb48b5e317" path="/var/lib/kubelet/pods/997707ef-4296-4151-9385-0fbb48b5e317/volumes"
Feb 02 13:36:16 crc kubenswrapper[4721]: I0202 13:36:16.425172 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9" path="/var/lib/kubelet/pods/ad02c2c3-e07d-4ab9-8498-26e3c2bfdfb9/volumes"
Feb 02 13:36:30 crc kubenswrapper[4721]: I0202 13:36:30.034789 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-4rm59"]
Feb 02 13:36:30 crc kubenswrapper[4721]: I0202 13:36:30.044862 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-4rm59"]
Feb 02 13:36:30 crc kubenswrapper[4721]: I0202 13:36:30.424326 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="798dac79-94bd-4655-b409-4b173956cdbf" path="/var/lib/kubelet/pods/798dac79-94bd-4655-b409-4b173956cdbf/volumes"
Feb 02 13:36:32 crc kubenswrapper[4721]: I0202 13:36:32.035285 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hclll"]
Feb 02 13:36:32 crc kubenswrapper[4721]: I0202 13:36:32.055605 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hclll"]
Feb 02 13:36:32 crc kubenswrapper[4721]: I0202 13:36:32.427386 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d38541e-139a-425e-a7bd-f7c484f7266b" path="/var/lib/kubelet/pods/6d38541e-139a-425e-a7bd-f7c484f7266b/volumes"
Feb 02 13:36:44 crc kubenswrapper[4721]: I0202 13:36:44.763959 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 13:36:44 crc kubenswrapper[4721]: I0202 13:36:44.764436 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 13:37:13 crc kubenswrapper[4721]: I0202 13:37:13.355291 4721 scope.go:117] "RemoveContainer" containerID="986695a877ff6ee1eff46d199a1d9a03005ab3f7a3f83da3d9212653e0413f7c"
Feb 02 13:37:13 crc kubenswrapper[4721]: I0202 13:37:13.426108 4721 scope.go:117] "RemoveContainer" containerID="2995f84a212135f6e3822d895963dca122d6d439797b7dcab3e6c3486dd7be70"
Feb 02 13:37:13 crc kubenswrapper[4721]: I0202 13:37:13.485141 4721 scope.go:117] "RemoveContainer" containerID="7eef54336f3dd0e3ab875f1a48960393b687a8f714292e0019c4853d4094a4be"
Feb 02 13:37:13 crc kubenswrapper[4721]: I0202 13:37:13.545881 4721 scope.go:117] "RemoveContainer" containerID="1857f80c5a621ba0a1377e0d92d2bafe33fbe9b8df8cc9bde107743c0bafe96d"
Feb 02 13:37:14 crc kubenswrapper[4721]: I0202 13:37:14.763935 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 13:37:14 crc kubenswrapper[4721]: I0202 13:37:14.764003 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 13:37:16 crc kubenswrapper[4721]: I0202 13:37:16.057603 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-crdjt"]
Feb 02 13:37:16 crc kubenswrapper[4721]: I0202 13:37:16.071590 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-crdjt"]
Feb 02 13:37:16 crc kubenswrapper[4721]: I0202 13:37:16.425546 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27df0911-fe79-4339-a6fe-cf538f97a247" path="/var/lib/kubelet/pods/27df0911-fe79-4339-a6fe-cf538f97a247/volumes"
Feb 02 13:37:44 crc kubenswrapper[4721]: I0202 13:37:44.765193 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 13:37:44 crc kubenswrapper[4721]: I0202 13:37:44.766184 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 13:37:44 crc kubenswrapper[4721]: I0202 13:37:44.766227 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rppjz"
Feb 02 13:37:44 crc kubenswrapper[4721]: I0202 13:37:44.767170 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8db65d587513bf606777ac46eb9a8dd677cd18af3524da4a78bb404625c5d58f"} pod="openshift-machine-config-operator/machine-config-daemon-rppjz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 13:37:44 crc kubenswrapper[4721]: I0202 13:37:44.767218 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" containerID="cri-o://8db65d587513bf606777ac46eb9a8dd677cd18af3524da4a78bb404625c5d58f" gracePeriod=600
Feb 02 13:37:45 crc kubenswrapper[4721]: I0202 13:37:45.773286 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerDied","Data":"8db65d587513bf606777ac46eb9a8dd677cd18af3524da4a78bb404625c5d58f"}
Feb 02 13:37:45 crc kubenswrapper[4721]: I0202 13:37:45.773201 4721 generic.go:334] "Generic (PLEG): container finished" podID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerID="8db65d587513bf606777ac46eb9a8dd677cd18af3524da4a78bb404625c5d58f" exitCode=0
Feb 02 13:37:45 crc kubenswrapper[4721]: I0202 13:37:45.773778 4721 scope.go:117] "RemoveContainer" containerID="28d0d51ec03a49d1ae8e4dc5ae68b3f445ea6c6231be11219a2f3b614eea299c"
Feb 02 13:37:45 crc kubenswrapper[4721]: I0202 13:37:45.773815 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerStarted","Data":"d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375"}
Feb 02 13:38:13 crc kubenswrapper[4721]: I0202 13:38:13.681719 4721 scope.go:117] "RemoveContainer" containerID="38697017d92e58f9ce89dc861401173c6d0237bce2d2e6a9a0bc00c9d2093bfd"
Feb 02 13:40:14 crc kubenswrapper[4721]: I0202 13:40:14.763443 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 13:40:14 crc kubenswrapper[4721]: I0202 13:40:14.765778 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 13:40:26 crc kubenswrapper[4721]: I0202 13:40:26.246254 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-grdzv"]
Feb 02 13:40:26 crc kubenswrapper[4721]: E0202 13:40:26.247611 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb6fd6e8-1134-4152-ba9f-c13b2660d022" containerName="registry-server"
Feb 02 13:40:26 crc kubenswrapper[4721]: I0202 13:40:26.247628 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb6fd6e8-1134-4152-ba9f-c13b2660d022" containerName="registry-server"
Feb 02 13:40:26 crc kubenswrapper[4721]: E0202 13:40:26.247666 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb6fd6e8-1134-4152-ba9f-c13b2660d022" containerName="extract-utilities"
Feb 02 13:40:26 crc kubenswrapper[4721]: I0202 13:40:26.247689 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb6fd6e8-1134-4152-ba9f-c13b2660d022" containerName="extract-utilities"
Feb 02 13:40:26 crc kubenswrapper[4721]: E0202 13:40:26.247736 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb6fd6e8-1134-4152-ba9f-c13b2660d022" containerName="extract-content"
Feb 02 13:40:26 crc kubenswrapper[4721]: I0202 13:40:26.247745 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb6fd6e8-1134-4152-ba9f-c13b2660d022" containerName="extract-content"
Feb 02 13:40:26 crc kubenswrapper[4721]: I0202 13:40:26.248001 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb6fd6e8-1134-4152-ba9f-c13b2660d022" containerName="registry-server"
Feb 02 13:40:26 crc kubenswrapper[4721]: I0202 13:40:26.252345 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-grdzv"
Feb 02 13:40:26 crc kubenswrapper[4721]: I0202 13:40:26.273586 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-grdzv"]
Feb 02 13:40:26 crc kubenswrapper[4721]: I0202 13:40:26.406714 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt7br\" (UniqueName: \"kubernetes.io/projected/fd185aef-3d37-4ad6-91f6-1470e8c39999-kube-api-access-kt7br\") pod \"certified-operators-grdzv\" (UID: \"fd185aef-3d37-4ad6-91f6-1470e8c39999\") " pod="openshift-marketplace/certified-operators-grdzv"
Feb 02 13:40:26 crc kubenswrapper[4721]: I0202 13:40:26.406788 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd185aef-3d37-4ad6-91f6-1470e8c39999-utilities\") pod \"certified-operators-grdzv\" (UID: \"fd185aef-3d37-4ad6-91f6-1470e8c39999\") " pod="openshift-marketplace/certified-operators-grdzv"
Feb 02 13:40:26 crc kubenswrapper[4721]: I0202 13:40:26.407326 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd185aef-3d37-4ad6-91f6-1470e8c39999-catalog-content\") pod \"certified-operators-grdzv\" (UID: \"fd185aef-3d37-4ad6-91f6-1470e8c39999\") " pod="openshift-marketplace/certified-operators-grdzv"
Feb 02 13:40:26 crc kubenswrapper[4721]: I0202 13:40:26.510342 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd185aef-3d37-4ad6-91f6-1470e8c39999-catalog-content\") pod \"certified-operators-grdzv\" (UID: \"fd185aef-3d37-4ad6-91f6-1470e8c39999\") " pod="openshift-marketplace/certified-operators-grdzv"
Feb 02 13:40:26 crc kubenswrapper[4721]: I0202 13:40:26.511011 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt7br\" (UniqueName: \"kubernetes.io/projected/fd185aef-3d37-4ad6-91f6-1470e8c39999-kube-api-access-kt7br\") pod \"certified-operators-grdzv\" (UID: \"fd185aef-3d37-4ad6-91f6-1470e8c39999\") " pod="openshift-marketplace/certified-operators-grdzv"
Feb 02 13:40:26 crc kubenswrapper[4721]: I0202 13:40:26.511063 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd185aef-3d37-4ad6-91f6-1470e8c39999-utilities\") pod \"certified-operators-grdzv\" (UID: \"fd185aef-3d37-4ad6-91f6-1470e8c39999\") " pod="openshift-marketplace/certified-operators-grdzv"
Feb 02 13:40:26 crc kubenswrapper[4721]: I0202 13:40:26.511837 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd185aef-3d37-4ad6-91f6-1470e8c39999-utilities\") pod \"certified-operators-grdzv\" (UID: \"fd185aef-3d37-4ad6-91f6-1470e8c39999\") " pod="openshift-marketplace/certified-operators-grdzv"
Feb 02 13:40:26 crc kubenswrapper[4721]: I0202 13:40:26.513696 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd185aef-3d37-4ad6-91f6-1470e8c39999-catalog-content\") pod \"certified-operators-grdzv\" (UID: \"fd185aef-3d37-4ad6-91f6-1470e8c39999\") " pod="openshift-marketplace/certified-operators-grdzv"
Feb 02 13:40:26 crc kubenswrapper[4721]: I0202 13:40:26.558548 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt7br\" (UniqueName: \"kubernetes.io/projected/fd185aef-3d37-4ad6-91f6-1470e8c39999-kube-api-access-kt7br\") pod \"certified-operators-grdzv\" (UID: \"fd185aef-3d37-4ad6-91f6-1470e8c39999\") " pod="openshift-marketplace/certified-operators-grdzv"
Feb 02 13:40:26 crc kubenswrapper[4721]: I0202 13:40:26.574545 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-grdzv"
Feb 02 13:40:27 crc kubenswrapper[4721]: I0202 13:40:27.095059 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-grdzv"]
Feb 02 13:40:27 crc kubenswrapper[4721]: I0202 13:40:27.640335 4721 generic.go:334] "Generic (PLEG): container finished" podID="fd185aef-3d37-4ad6-91f6-1470e8c39999" containerID="eb2c414ab7a095044068fc325ff9cdd4824dcdcf846d8f08971020fc8acf2a20" exitCode=0
Feb 02 13:40:27 crc kubenswrapper[4721]: I0202 13:40:27.640400 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-grdzv" event={"ID":"fd185aef-3d37-4ad6-91f6-1470e8c39999","Type":"ContainerDied","Data":"eb2c414ab7a095044068fc325ff9cdd4824dcdcf846d8f08971020fc8acf2a20"}
Feb 02 13:40:27 crc kubenswrapper[4721]: I0202 13:40:27.640440 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-grdzv" event={"ID":"fd185aef-3d37-4ad6-91f6-1470e8c39999","Type":"ContainerStarted","Data":"03f6acbdacdbf0004670972985b55556e86b1549a5f9eabecd04a5ea87091b90"}
Feb 02 13:40:27 crc kubenswrapper[4721]: I0202 13:40:27.649139 4721 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 02 13:40:29 crc kubenswrapper[4721]: I0202 13:40:29.670817 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-grdzv" event={"ID":"fd185aef-3d37-4ad6-91f6-1470e8c39999","Type":"ContainerStarted","Data":"e0c93ba403bb3249e362943f6bbbde5f04706709dac91f6ae4e753f375727084"}
Feb 02 13:40:30 crc kubenswrapper[4721]: I0202 13:40:30.683970 4721 generic.go:334] "Generic (PLEG): container finished" podID="fd185aef-3d37-4ad6-91f6-1470e8c39999" containerID="e0c93ba403bb3249e362943f6bbbde5f04706709dac91f6ae4e753f375727084" exitCode=0
Feb 02 13:40:30 crc kubenswrapper[4721]: I0202 13:40:30.684413 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-grdzv" event={"ID":"fd185aef-3d37-4ad6-91f6-1470e8c39999","Type":"ContainerDied","Data":"e0c93ba403bb3249e362943f6bbbde5f04706709dac91f6ae4e753f375727084"}
Feb 02 13:40:32 crc kubenswrapper[4721]: I0202 13:40:32.705576 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-grdzv" event={"ID":"fd185aef-3d37-4ad6-91f6-1470e8c39999","Type":"ContainerStarted","Data":"278550fd60063b49953c40090a90bc342ae6b827a0bf1de59628401a7700c1fb"}
Feb 02 13:40:32 crc kubenswrapper[4721]: I0202 13:40:32.740764 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-grdzv" podStartSLOduration=2.9267386909999997 podStartE2EDuration="6.740738327s" podCreationTimestamp="2026-02-02 13:40:26 +0000 UTC" firstStartedPulling="2026-02-02 13:40:27.648830715 +0000 UTC m=+2367.951345104" lastFinishedPulling="2026-02-02 13:40:31.462830341 +0000 UTC m=+2371.765344740" observedRunningTime="2026-02-02 13:40:32.737625913 +0000 UTC m=+2373.040140402" watchObservedRunningTime="2026-02-02 13:40:32.740738327 +0000 UTC m=+2373.043252726"
Feb 02 13:40:36 crc kubenswrapper[4721]: I0202 13:40:36.574682 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-grdzv"
Feb 02 13:40:36 crc kubenswrapper[4721]: I0202 13:40:36.575395 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-grdzv"
Feb 02 13:40:36 crc kubenswrapper[4721]: I0202 13:40:36.640560 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-grdzv"
Feb 02 13:40:36 crc kubenswrapper[4721]: I0202 13:40:36.815812 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-grdzv"
Feb 02 13:40:40 crc kubenswrapper[4721]: I0202 13:40:40.234112 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-grdzv"]
Feb 02 13:40:40 crc kubenswrapper[4721]: I0202 13:40:40.234959 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-grdzv" podUID="fd185aef-3d37-4ad6-91f6-1470e8c39999" containerName="registry-server" containerID="cri-o://278550fd60063b49953c40090a90bc342ae6b827a0bf1de59628401a7700c1fb" gracePeriod=2
Feb 02 13:40:40 crc kubenswrapper[4721]: I0202 13:40:40.783426 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-grdzv"
Feb 02 13:40:40 crc kubenswrapper[4721]: I0202 13:40:40.795646 4721 generic.go:334] "Generic (PLEG): container finished" podID="fd185aef-3d37-4ad6-91f6-1470e8c39999" containerID="278550fd60063b49953c40090a90bc342ae6b827a0bf1de59628401a7700c1fb" exitCode=0
Feb 02 13:40:40 crc kubenswrapper[4721]: I0202 13:40:40.795683 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-grdzv" event={"ID":"fd185aef-3d37-4ad6-91f6-1470e8c39999","Type":"ContainerDied","Data":"278550fd60063b49953c40090a90bc342ae6b827a0bf1de59628401a7700c1fb"}
Feb 02 13:40:40 crc kubenswrapper[4721]: I0202 13:40:40.795706 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-grdzv" event={"ID":"fd185aef-3d37-4ad6-91f6-1470e8c39999","Type":"ContainerDied","Data":"03f6acbdacdbf0004670972985b55556e86b1549a5f9eabecd04a5ea87091b90"}
Feb 02 13:40:40 crc kubenswrapper[4721]: I0202 13:40:40.795735 4721 scope.go:117] "RemoveContainer" containerID="278550fd60063b49953c40090a90bc342ae6b827a0bf1de59628401a7700c1fb"
Feb 02 13:40:40 crc kubenswrapper[4721]: I0202 13:40:40.795784 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-grdzv"
Feb 02 13:40:40 crc kubenswrapper[4721]: I0202 13:40:40.812460 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd185aef-3d37-4ad6-91f6-1470e8c39999-catalog-content\") pod \"fd185aef-3d37-4ad6-91f6-1470e8c39999\" (UID: \"fd185aef-3d37-4ad6-91f6-1470e8c39999\") "
Feb 02 13:40:40 crc kubenswrapper[4721]: I0202 13:40:40.813030 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd185aef-3d37-4ad6-91f6-1470e8c39999-utilities\") pod \"fd185aef-3d37-4ad6-91f6-1470e8c39999\" (UID: \"fd185aef-3d37-4ad6-91f6-1470e8c39999\") "
Feb 02 13:40:40 crc kubenswrapper[4721]: I0202 13:40:40.813058 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt7br\" (UniqueName: \"kubernetes.io/projected/fd185aef-3d37-4ad6-91f6-1470e8c39999-kube-api-access-kt7br\") pod \"fd185aef-3d37-4ad6-91f6-1470e8c39999\" (UID: \"fd185aef-3d37-4ad6-91f6-1470e8c39999\") "
Feb 02 13:40:40 crc kubenswrapper[4721]: I0202 13:40:40.823779 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd185aef-3d37-4ad6-91f6-1470e8c39999-utilities" (OuterVolumeSpecName: "utilities") pod "fd185aef-3d37-4ad6-91f6-1470e8c39999" (UID: "fd185aef-3d37-4ad6-91f6-1470e8c39999"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:40:40 crc kubenswrapper[4721]: I0202 13:40:40.855597 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd185aef-3d37-4ad6-91f6-1470e8c39999-kube-api-access-kt7br" (OuterVolumeSpecName: "kube-api-access-kt7br") pod "fd185aef-3d37-4ad6-91f6-1470e8c39999" (UID: "fd185aef-3d37-4ad6-91f6-1470e8c39999"). InnerVolumeSpecName "kube-api-access-kt7br". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:40:40 crc kubenswrapper[4721]: I0202 13:40:40.862519 4721 scope.go:117] "RemoveContainer" containerID="e0c93ba403bb3249e362943f6bbbde5f04706709dac91f6ae4e753f375727084"
Feb 02 13:40:40 crc kubenswrapper[4721]: I0202 13:40:40.900355 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd185aef-3d37-4ad6-91f6-1470e8c39999-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd185aef-3d37-4ad6-91f6-1470e8c39999" (UID: "fd185aef-3d37-4ad6-91f6-1470e8c39999"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:40:40 crc kubenswrapper[4721]: I0202 13:40:40.906815 4721 scope.go:117] "RemoveContainer" containerID="eb2c414ab7a095044068fc325ff9cdd4824dcdcf846d8f08971020fc8acf2a20"
Feb 02 13:40:40 crc kubenswrapper[4721]: I0202 13:40:40.918343 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd185aef-3d37-4ad6-91f6-1470e8c39999-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 13:40:40 crc kubenswrapper[4721]: I0202 13:40:40.918383 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd185aef-3d37-4ad6-91f6-1470e8c39999-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 13:40:40 crc kubenswrapper[4721]: I0202 13:40:40.918430 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt7br\" (UniqueName: \"kubernetes.io/projected/fd185aef-3d37-4ad6-91f6-1470e8c39999-kube-api-access-kt7br\") on node \"crc\" DevicePath \"\""
Feb 02 13:40:40 crc kubenswrapper[4721]: I0202 13:40:40.958651 4721 scope.go:117] "RemoveContainer" containerID="278550fd60063b49953c40090a90bc342ae6b827a0bf1de59628401a7700c1fb"
Feb 02 13:40:40 crc kubenswrapper[4721]: E0202 13:40:40.959178 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"278550fd60063b49953c40090a90bc342ae6b827a0bf1de59628401a7700c1fb\": container with ID starting with 278550fd60063b49953c40090a90bc342ae6b827a0bf1de59628401a7700c1fb not found: ID does not exist" containerID="278550fd60063b49953c40090a90bc342ae6b827a0bf1de59628401a7700c1fb"
Feb 02 13:40:40 crc kubenswrapper[4721]: I0202 13:40:40.959266 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"278550fd60063b49953c40090a90bc342ae6b827a0bf1de59628401a7700c1fb"} err="failed to get container status \"278550fd60063b49953c40090a90bc342ae6b827a0bf1de59628401a7700c1fb\": rpc error: code = NotFound desc = could not find container \"278550fd60063b49953c40090a90bc342ae6b827a0bf1de59628401a7700c1fb\": container with ID starting with 278550fd60063b49953c40090a90bc342ae6b827a0bf1de59628401a7700c1fb not found: ID does not exist"
Feb 02 13:40:40 crc kubenswrapper[4721]: I0202 13:40:40.959304 4721 scope.go:117] "RemoveContainer" containerID="e0c93ba403bb3249e362943f6bbbde5f04706709dac91f6ae4e753f375727084"
Feb 02 13:40:40 crc kubenswrapper[4721]: E0202 13:40:40.959907 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0c93ba403bb3249e362943f6bbbde5f04706709dac91f6ae4e753f375727084\": container with ID starting with e0c93ba403bb3249e362943f6bbbde5f04706709dac91f6ae4e753f375727084 not found: ID does not exist" containerID="e0c93ba403bb3249e362943f6bbbde5f04706709dac91f6ae4e753f375727084"
Feb 02 13:40:40 crc kubenswrapper[4721]: I0202 13:40:40.959948 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0c93ba403bb3249e362943f6bbbde5f04706709dac91f6ae4e753f375727084"} err="failed to get container status \"e0c93ba403bb3249e362943f6bbbde5f04706709dac91f6ae4e753f375727084\": rpc error: code = NotFound desc = could not find container \"e0c93ba403bb3249e362943f6bbbde5f04706709dac91f6ae4e753f375727084\": container with ID starting with e0c93ba403bb3249e362943f6bbbde5f04706709dac91f6ae4e753f375727084 not found: ID does not exist"
Feb 02 13:40:40 crc kubenswrapper[4721]: I0202 13:40:40.959997 4721 scope.go:117] "RemoveContainer" containerID="eb2c414ab7a095044068fc325ff9cdd4824dcdcf846d8f08971020fc8acf2a20"
Feb 02 13:40:40 crc kubenswrapper[4721]: E0202 13:40:40.960541 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb2c414ab7a095044068fc325ff9cdd4824dcdcf846d8f08971020fc8acf2a20\": container with ID starting with eb2c414ab7a095044068fc325ff9cdd4824dcdcf846d8f08971020fc8acf2a20 not found: ID does not exist" containerID="eb2c414ab7a095044068fc325ff9cdd4824dcdcf846d8f08971020fc8acf2a20"
Feb 02 13:40:40 crc kubenswrapper[4721]: I0202 13:40:40.960569 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb2c414ab7a095044068fc325ff9cdd4824dcdcf846d8f08971020fc8acf2a20"} err="failed to get container status \"eb2c414ab7a095044068fc325ff9cdd4824dcdcf846d8f08971020fc8acf2a20\": rpc error: code = NotFound desc = could not find container \"eb2c414ab7a095044068fc325ff9cdd4824dcdcf846d8f08971020fc8acf2a20\": container with ID starting with eb2c414ab7a095044068fc325ff9cdd4824dcdcf846d8f08971020fc8acf2a20 not found: ID does not exist"
Feb 02 13:40:41 crc kubenswrapper[4721]: I0202 13:40:41.137246 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-grdzv"]
Feb 02 13:40:41 crc kubenswrapper[4721]: I0202 13:40:41.147736 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-grdzv"]
Feb 02 13:40:42 crc kubenswrapper[4721]: I0202 13:40:42.427923 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd185aef-3d37-4ad6-91f6-1470e8c39999" path="/var/lib/kubelet/pods/fd185aef-3d37-4ad6-91f6-1470e8c39999/volumes"
Feb 02 13:40:44 crc kubenswrapper[4721]: I0202 13:40:44.763718 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 13:40:44 crc kubenswrapper[4721]: I0202 13:40:44.764227 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 13:41:14 crc kubenswrapper[4721]: I0202 13:41:14.763482 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 13:41:14 crc kubenswrapper[4721]: I0202 13:41:14.764200 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 13:41:14 crc kubenswrapper[4721]: I0202 13:41:14.764266 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rppjz"
Feb 02 13:41:14 crc kubenswrapper[4721]: I0202 13:41:14.765548 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375"} pod="openshift-machine-config-operator/machine-config-daemon-rppjz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 13:41:14 crc kubenswrapper[4721]: I0202 13:41:14.765661 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" containerID="cri-o://d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375" gracePeriod=600
Feb 02 13:41:14 crc kubenswrapper[4721]: E0202 13:41:14.911705 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:41:15 crc kubenswrapper[4721]: I0202 13:41:15.262887 4721 generic.go:334] "Generic (PLEG): container finished" podID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerID="d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375" exitCode=0
Feb 02 13:41:15 crc kubenswrapper[4721]: I0202 13:41:15.262930 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerDied","Data":"d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375"}
Feb 02 13:41:15 crc kubenswrapper[4721]: I0202 13:41:15.262972 4721 scope.go:117] "RemoveContainer" containerID="8db65d587513bf606777ac46eb9a8dd677cd18af3524da4a78bb404625c5d58f"
Feb 02 13:41:15 crc kubenswrapper[4721]: I0202 13:41:15.264205 4721 scope.go:117] "RemoveContainer" containerID="d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375"
Feb 02 13:41:15 crc kubenswrapper[4721]: E0202 13:41:15.264941 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:41:26 crc kubenswrapper[4721]: I0202 13:41:26.411403 4721 scope.go:117] "RemoveContainer" containerID="d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375"
Feb 02 13:41:26 crc kubenswrapper[4721]: E0202 13:41:26.412585 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:41:37 crc kubenswrapper[4721]: I0202 13:41:37.410438 4721 scope.go:117] "RemoveContainer" containerID="d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375"
Feb 02 13:41:37 crc kubenswrapper[4721]: E0202 13:41:37.411337 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:41:51 crc kubenswrapper[4721]: I0202 13:41:51.409815 4721 scope.go:117] "RemoveContainer" containerID="d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375"
Feb 02 13:41:51 crc kubenswrapper[4721]: E0202 13:41:51.412322 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:42:04 crc kubenswrapper[4721]: I0202 13:42:04.409983 4721 scope.go:117] "RemoveContainer" containerID="d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375"
Feb 02 13:42:04 crc kubenswrapper[4721]: E0202 13:42:04.411018 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:42:18 crc kubenswrapper[4721]: I0202 13:42:18.409259 4721 scope.go:117] "RemoveContainer" containerID="d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375"
Feb 02 13:42:18 crc kubenswrapper[4721]: E0202 13:42:18.410012 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:42:29 crc kubenswrapper[4721]: I0202 13:42:29.409406 4721 scope.go:117] "RemoveContainer" containerID="d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375"
Feb 02 13:42:29 crc kubenswrapper[4721]: E0202 13:42:29.410301 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:42:40 crc kubenswrapper[4721]: I0202 13:42:40.419089 4721 scope.go:117] "RemoveContainer" containerID="d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375"
Feb 02 13:42:40 crc kubenswrapper[4721]: E0202 13:42:40.420044 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:42:51 crc kubenswrapper[4721]: I0202 13:42:51.410479 4721 scope.go:117] "RemoveContainer" containerID="d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375"
Feb 02 13:42:51 crc kubenswrapper[4721]: E0202 13:42:51.411170 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:43:02 crc kubenswrapper[4721]: I0202 13:43:02.410294 4721 scope.go:117] "RemoveContainer" containerID="d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375"
Feb 02 13:43:02 crc kubenswrapper[4721]: E0202 13:43:02.411221 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:43:13 crc kubenswrapper[4721]: I0202 13:43:13.412021 4721 scope.go:117] "RemoveContainer" containerID="d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375"
Feb 02 13:43:13 crc kubenswrapper[4721]: E0202 13:43:13.413382 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:43:24 crc kubenswrapper[4721]: I0202 13:43:24.410559 4721 scope.go:117] "RemoveContainer" containerID="d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375"
Feb 02 13:43:24 crc kubenswrapper[4721]: E0202 13:43:24.412002 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:43:39 crc kubenswrapper[4721]: I0202 13:43:39.411192 4721 scope.go:117] "RemoveContainer" containerID="d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375"
Feb 02 13:43:39 crc kubenswrapper[4721]: E0202 13:43:39.411815 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:43:53 crc kubenswrapper[4721]: I0202 13:43:53.410830 4721 scope.go:117] "RemoveContainer" containerID="d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375"
Feb 02 13:43:53 crc kubenswrapper[4721]: E0202 13:43:53.415590 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:44:08 crc kubenswrapper[4721]: I0202 13:44:08.410952 4721 scope.go:117] "RemoveContainer" containerID="d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375"
Feb 02 13:44:08 crc kubenswrapper[4721]: E0202 13:44:08.411982 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:44:23 crc kubenswrapper[4721]: I0202 13:44:23.409818 4721 scope.go:117] "RemoveContainer" containerID="d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375"
Feb 02 13:44:23 crc kubenswrapper[4721]: E0202 13:44:23.410725 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:44:26 crc kubenswrapper[4721]: I0202 13:44:26.751240 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vzq62"]
Feb 02 13:44:26 crc kubenswrapper[4721]: E0202 13:44:26.752645 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd185aef-3d37-4ad6-91f6-1470e8c39999" containerName="extract-utilities"
Feb 02 13:44:26 crc kubenswrapper[4721]: I0202 13:44:26.752671 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd185aef-3d37-4ad6-91f6-1470e8c39999" containerName="extract-utilities"
Feb 02 13:44:26 crc kubenswrapper[4721]: E0202 13:44:26.752702 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd185aef-3d37-4ad6-91f6-1470e8c39999" containerName="registry-server"
Feb 02 13:44:26 crc kubenswrapper[4721]: I0202 13:44:26.752714 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd185aef-3d37-4ad6-91f6-1470e8c39999" containerName="registry-server"
Feb 02 13:44:26 crc kubenswrapper[4721]: E0202 13:44:26.752751 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd185aef-3d37-4ad6-91f6-1470e8c39999" containerName="extract-content"
Feb 02 13:44:26 crc kubenswrapper[4721]: I0202 13:44:26.752761 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd185aef-3d37-4ad6-91f6-1470e8c39999" containerName="extract-content"
Feb 02 13:44:26 crc kubenswrapper[4721]: I0202 13:44:26.753206 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd185aef-3d37-4ad6-91f6-1470e8c39999" containerName="registry-server"
Feb 02 13:44:26 crc kubenswrapper[4721]: I0202 13:44:26.760873 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vzq62"
Feb 02 13:44:26 crc kubenswrapper[4721]: I0202 13:44:26.775523 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vzq62"]
Feb 02 13:44:26 crc kubenswrapper[4721]: I0202 13:44:26.944147 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7ad3a36-c7b7-42f9-a87a-773830064c68-catalog-content\") pod \"redhat-operators-vzq62\" (UID: \"c7ad3a36-c7b7-42f9-a87a-773830064c68\") " pod="openshift-marketplace/redhat-operators-vzq62"
Feb 02 13:44:26 crc kubenswrapper[4721]: I0202 13:44:26.944458 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7ad3a36-c7b7-42f9-a87a-773830064c68-utilities\") pod \"redhat-operators-vzq62\" (UID: \"c7ad3a36-c7b7-42f9-a87a-773830064c68\") " pod="openshift-marketplace/redhat-operators-vzq62"
Feb 02 13:44:26 crc kubenswrapper[4721]: I0202 13:44:26.944602 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbz8s\" (UniqueName: \"kubernetes.io/projected/c7ad3a36-c7b7-42f9-a87a-773830064c68-kube-api-access-nbz8s\") pod \"redhat-operators-vzq62\" (UID: \"c7ad3a36-c7b7-42f9-a87a-773830064c68\") " pod="openshift-marketplace/redhat-operators-vzq62"
Feb 02 13:44:27 crc kubenswrapper[4721]: I0202 13:44:27.047009 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbz8s\" (UniqueName: \"kubernetes.io/projected/c7ad3a36-c7b7-42f9-a87a-773830064c68-kube-api-access-nbz8s\") pod \"redhat-operators-vzq62\" (UID: \"c7ad3a36-c7b7-42f9-a87a-773830064c68\") " pod="openshift-marketplace/redhat-operators-vzq62"
Feb 02 13:44:27 crc kubenswrapper[4721]: I0202 13:44:27.047166 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7ad3a36-c7b7-42f9-a87a-773830064c68-catalog-content\") pod \"redhat-operators-vzq62\" (UID: \"c7ad3a36-c7b7-42f9-a87a-773830064c68\") " pod="openshift-marketplace/redhat-operators-vzq62"
Feb 02 13:44:27 crc kubenswrapper[4721]: I0202 13:44:27.047191 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7ad3a36-c7b7-42f9-a87a-773830064c68-utilities\") pod \"redhat-operators-vzq62\" (UID: \"c7ad3a36-c7b7-42f9-a87a-773830064c68\") " pod="openshift-marketplace/redhat-operators-vzq62"
Feb 02 13:44:27 crc kubenswrapper[4721]: I0202 13:44:27.047716 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7ad3a36-c7b7-42f9-a87a-773830064c68-utilities\") pod \"redhat-operators-vzq62\" (UID: \"c7ad3a36-c7b7-42f9-a87a-773830064c68\") " pod="openshift-marketplace/redhat-operators-vzq62"
Feb 02 13:44:27 crc kubenswrapper[4721]: I0202 13:44:27.047804 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7ad3a36-c7b7-42f9-a87a-773830064c68-catalog-content\") pod \"redhat-operators-vzq62\" (UID: \"c7ad3a36-c7b7-42f9-a87a-773830064c68\") " pod="openshift-marketplace/redhat-operators-vzq62"
Feb 02 13:44:27 crc kubenswrapper[4721]: I0202 13:44:27.073386 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbz8s\" (UniqueName: \"kubernetes.io/projected/c7ad3a36-c7b7-42f9-a87a-773830064c68-kube-api-access-nbz8s\") pod \"redhat-operators-vzq62\" (UID: \"c7ad3a36-c7b7-42f9-a87a-773830064c68\") " pod="openshift-marketplace/redhat-operators-vzq62"
Feb 02 13:44:27 crc kubenswrapper[4721]: I0202 13:44:27.099959 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vzq62"
Feb 02 13:44:27 crc kubenswrapper[4721]: I0202 13:44:27.628296 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vzq62"]
Feb 02 13:44:28 crc kubenswrapper[4721]: I0202 13:44:28.592241 4721 generic.go:334] "Generic (PLEG): container finished" podID="c7ad3a36-c7b7-42f9-a87a-773830064c68" containerID="6ed6ebdce7dc1f0c63d36ffc0347489abd3b475d4ec5370a068a0f2d8d8aed2e" exitCode=0
Feb 02 13:44:28 crc kubenswrapper[4721]: I0202 13:44:28.592283 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzq62" event={"ID":"c7ad3a36-c7b7-42f9-a87a-773830064c68","Type":"ContainerDied","Data":"6ed6ebdce7dc1f0c63d36ffc0347489abd3b475d4ec5370a068a0f2d8d8aed2e"}
Feb 02 13:44:28 crc kubenswrapper[4721]: I0202 13:44:28.592566 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzq62" event={"ID":"c7ad3a36-c7b7-42f9-a87a-773830064c68","Type":"ContainerStarted","Data":"77b7d66f25e886f7db4f55994fba899900aceb23df2d0536848f46034d8e549e"}
Feb 02 13:44:29 crc kubenswrapper[4721]: I0202 13:44:29.607305 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzq62" event={"ID":"c7ad3a36-c7b7-42f9-a87a-773830064c68","Type":"ContainerStarted","Data":"a2fe9cbea688d61acdba1a4b24bb3ac1862820b63ba60d3eb6428d91935c2ceb"}
Feb 02 13:44:34 crc kubenswrapper[4721]: I0202 13:44:34.678047 4721 generic.go:334] "Generic (PLEG): container finished" podID="c7ad3a36-c7b7-42f9-a87a-773830064c68" containerID="a2fe9cbea688d61acdba1a4b24bb3ac1862820b63ba60d3eb6428d91935c2ceb" exitCode=0
Feb 02 13:44:34 crc kubenswrapper[4721]: I0202 13:44:34.678132 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzq62" event={"ID":"c7ad3a36-c7b7-42f9-a87a-773830064c68","Type":"ContainerDied","Data":"a2fe9cbea688d61acdba1a4b24bb3ac1862820b63ba60d3eb6428d91935c2ceb"}
Feb 02 13:44:35 crc kubenswrapper[4721]: I0202 13:44:35.409613 4721 scope.go:117] "RemoveContainer" containerID="d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375"
Feb 02 13:44:35 crc kubenswrapper[4721]: E0202 13:44:35.410448 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:44:35 crc kubenswrapper[4721]: I0202 13:44:35.701491 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzq62" event={"ID":"c7ad3a36-c7b7-42f9-a87a-773830064c68","Type":"ContainerStarted","Data":"09ad1715d2346f747a6d653972cea32abcdad8091f3d1786f97d4c5c0bdd3ce8"}
Feb 02 13:44:35 crc kubenswrapper[4721]: I0202 13:44:35.745416 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vzq62" podStartSLOduration=3.245523889 podStartE2EDuration="9.745387898s" podCreationTimestamp="2026-02-02 13:44:26 +0000 UTC" firstStartedPulling="2026-02-02 13:44:28.595258184 +0000 UTC m=+2608.897772573" lastFinishedPulling="2026-02-02 13:44:35.095122173 +0000 UTC m=+2615.397636582" observedRunningTime="2026-02-02 13:44:35.733721103 +0000 UTC m=+2616.036235492" watchObservedRunningTime="2026-02-02 13:44:35.745387898 +0000 UTC m=+2616.047902317"
Feb 02 13:44:37 crc kubenswrapper[4721]: I0202 13:44:37.101291 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vzq62"
Feb 02 13:44:37 crc kubenswrapper[4721]: I0202 13:44:37.101703 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vzq62"
Feb 02 13:44:38 crc kubenswrapper[4721]: I0202 13:44:38.175405 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vzq62" podUID="c7ad3a36-c7b7-42f9-a87a-773830064c68" containerName="registry-server" probeResult="failure" output=<
Feb 02 13:44:38 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s
Feb 02 13:44:38 crc kubenswrapper[4721]: >
Feb 02 13:44:46 crc kubenswrapper[4721]: I0202 13:44:46.409903 4721 scope.go:117] "RemoveContainer" containerID="d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375"
Feb 02 13:44:46 crc kubenswrapper[4721]: E0202 13:44:46.410835 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:44:47 crc kubenswrapper[4721]: I0202 13:44:47.164955 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vzq62"
Feb 02 13:44:47 crc kubenswrapper[4721]: I0202 13:44:47.258272 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vzq62"
Feb 02 13:44:47 crc kubenswrapper[4721]: I0202 13:44:47.425183 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vzq62"]
Feb 02 13:44:48 crc kubenswrapper[4721]: I0202 13:44:48.866288 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vzq62" podUID="c7ad3a36-c7b7-42f9-a87a-773830064c68" containerName="registry-server" containerID="cri-o://09ad1715d2346f747a6d653972cea32abcdad8091f3d1786f97d4c5c0bdd3ce8" gracePeriod=2
Feb 02 13:44:49 crc kubenswrapper[4721]: I0202 13:44:49.491243 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vzq62"
Feb 02 13:44:49 crc kubenswrapper[4721]: I0202 13:44:49.650835 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbz8s\" (UniqueName: \"kubernetes.io/projected/c7ad3a36-c7b7-42f9-a87a-773830064c68-kube-api-access-nbz8s\") pod \"c7ad3a36-c7b7-42f9-a87a-773830064c68\" (UID: \"c7ad3a36-c7b7-42f9-a87a-773830064c68\") "
Feb 02 13:44:49 crc kubenswrapper[4721]: I0202 13:44:49.651148 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7ad3a36-c7b7-42f9-a87a-773830064c68-catalog-content\") pod \"c7ad3a36-c7b7-42f9-a87a-773830064c68\" (UID: \"c7ad3a36-c7b7-42f9-a87a-773830064c68\") "
Feb 02 13:44:49 crc kubenswrapper[4721]: I0202 13:44:49.651281 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7ad3a36-c7b7-42f9-a87a-773830064c68-utilities\") pod \"c7ad3a36-c7b7-42f9-a87a-773830064c68\" (UID: \"c7ad3a36-c7b7-42f9-a87a-773830064c68\") "
Feb 02 13:44:49 crc kubenswrapper[4721]: I0202 13:44:49.652026 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7ad3a36-c7b7-42f9-a87a-773830064c68-utilities" (OuterVolumeSpecName: "utilities") pod "c7ad3a36-c7b7-42f9-a87a-773830064c68" (UID: "c7ad3a36-c7b7-42f9-a87a-773830064c68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:44:49 crc kubenswrapper[4721]: I0202 13:44:49.662966 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7ad3a36-c7b7-42f9-a87a-773830064c68-kube-api-access-nbz8s" (OuterVolumeSpecName: "kube-api-access-nbz8s") pod "c7ad3a36-c7b7-42f9-a87a-773830064c68" (UID: "c7ad3a36-c7b7-42f9-a87a-773830064c68"). InnerVolumeSpecName "kube-api-access-nbz8s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 13:44:49 crc kubenswrapper[4721]: I0202 13:44:49.754583 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7ad3a36-c7b7-42f9-a87a-773830064c68-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 13:44:49 crc kubenswrapper[4721]: I0202 13:44:49.754619 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbz8s\" (UniqueName: \"kubernetes.io/projected/c7ad3a36-c7b7-42f9-a87a-773830064c68-kube-api-access-nbz8s\") on node \"crc\" DevicePath \"\""
Feb 02 13:44:49 crc kubenswrapper[4721]: I0202 13:44:49.784797 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7ad3a36-c7b7-42f9-a87a-773830064c68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7ad3a36-c7b7-42f9-a87a-773830064c68" (UID: "c7ad3a36-c7b7-42f9-a87a-773830064c68"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 13:44:49 crc kubenswrapper[4721]: I0202 13:44:49.857047 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7ad3a36-c7b7-42f9-a87a-773830064c68-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 13:44:49 crc kubenswrapper[4721]: I0202 13:44:49.878422 4721 generic.go:334] "Generic (PLEG): container finished" podID="c7ad3a36-c7b7-42f9-a87a-773830064c68" containerID="09ad1715d2346f747a6d653972cea32abcdad8091f3d1786f97d4c5c0bdd3ce8" exitCode=0
Feb 02 13:44:49 crc kubenswrapper[4721]: I0202 13:44:49.878462 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzq62" event={"ID":"c7ad3a36-c7b7-42f9-a87a-773830064c68","Type":"ContainerDied","Data":"09ad1715d2346f747a6d653972cea32abcdad8091f3d1786f97d4c5c0bdd3ce8"}
Feb 02 13:44:49 crc kubenswrapper[4721]: I0202 13:44:49.878505 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzq62" event={"ID":"c7ad3a36-c7b7-42f9-a87a-773830064c68","Type":"ContainerDied","Data":"77b7d66f25e886f7db4f55994fba899900aceb23df2d0536848f46034d8e549e"}
Feb 02 13:44:49 crc kubenswrapper[4721]: I0202 13:44:49.878527 4721 scope.go:117] "RemoveContainer" containerID="09ad1715d2346f747a6d653972cea32abcdad8091f3d1786f97d4c5c0bdd3ce8"
Feb 02 13:44:49 crc kubenswrapper[4721]: I0202 13:44:49.878527 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vzq62"
Feb 02 13:44:49 crc kubenswrapper[4721]: I0202 13:44:49.915593 4721 scope.go:117] "RemoveContainer" containerID="a2fe9cbea688d61acdba1a4b24bb3ac1862820b63ba60d3eb6428d91935c2ceb"
Feb 02 13:44:49 crc kubenswrapper[4721]: I0202 13:44:49.921489 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vzq62"]
Feb 02 13:44:49 crc kubenswrapper[4721]: I0202 13:44:49.932100 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vzq62"]
Feb 02 13:44:49 crc kubenswrapper[4721]: I0202 13:44:49.956502 4721 scope.go:117] "RemoveContainer" containerID="6ed6ebdce7dc1f0c63d36ffc0347489abd3b475d4ec5370a068a0f2d8d8aed2e"
Feb 02 13:44:50 crc kubenswrapper[4721]: I0202 13:44:50.020614 4721 scope.go:117] "RemoveContainer" containerID="09ad1715d2346f747a6d653972cea32abcdad8091f3d1786f97d4c5c0bdd3ce8"
Feb 02 13:44:50 crc kubenswrapper[4721]: E0202 13:44:50.021076 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09ad1715d2346f747a6d653972cea32abcdad8091f3d1786f97d4c5c0bdd3ce8\": container with ID starting with 09ad1715d2346f747a6d653972cea32abcdad8091f3d1786f97d4c5c0bdd3ce8 not found: ID does not exist" containerID="09ad1715d2346f747a6d653972cea32abcdad8091f3d1786f97d4c5c0bdd3ce8"
Feb 02 13:44:50 crc kubenswrapper[4721]: I0202 13:44:50.021109 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09ad1715d2346f747a6d653972cea32abcdad8091f3d1786f97d4c5c0bdd3ce8"} err="failed to get container status \"09ad1715d2346f747a6d653972cea32abcdad8091f3d1786f97d4c5c0bdd3ce8\": rpc error: code = NotFound desc = could not find container \"09ad1715d2346f747a6d653972cea32abcdad8091f3d1786f97d4c5c0bdd3ce8\": container with ID starting with 09ad1715d2346f747a6d653972cea32abcdad8091f3d1786f97d4c5c0bdd3ce8 not found: ID does not exist"
Feb 02 13:44:50 crc kubenswrapper[4721]: I0202 13:44:50.021129 4721 scope.go:117] "RemoveContainer" containerID="a2fe9cbea688d61acdba1a4b24bb3ac1862820b63ba60d3eb6428d91935c2ceb"
Feb 02 13:44:50 crc kubenswrapper[4721]: E0202 13:44:50.021598 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2fe9cbea688d61acdba1a4b24bb3ac1862820b63ba60d3eb6428d91935c2ceb\": container with ID starting with a2fe9cbea688d61acdba1a4b24bb3ac1862820b63ba60d3eb6428d91935c2ceb not found: ID does not exist" containerID="a2fe9cbea688d61acdba1a4b24bb3ac1862820b63ba60d3eb6428d91935c2ceb"
Feb 02 13:44:50 crc kubenswrapper[4721]: I0202 13:44:50.021672 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2fe9cbea688d61acdba1a4b24bb3ac1862820b63ba60d3eb6428d91935c2ceb"} err="failed to get container status \"a2fe9cbea688d61acdba1a4b24bb3ac1862820b63ba60d3eb6428d91935c2ceb\": rpc error: code = NotFound desc = could not find container \"a2fe9cbea688d61acdba1a4b24bb3ac1862820b63ba60d3eb6428d91935c2ceb\": container with ID starting with a2fe9cbea688d61acdba1a4b24bb3ac1862820b63ba60d3eb6428d91935c2ceb not found: ID does not exist"
Feb 02 13:44:50 crc kubenswrapper[4721]: I0202 13:44:50.021717 4721 scope.go:117] "RemoveContainer" containerID="6ed6ebdce7dc1f0c63d36ffc0347489abd3b475d4ec5370a068a0f2d8d8aed2e"
Feb 02 13:44:50 crc kubenswrapper[4721]: E0202 13:44:50.022429 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ed6ebdce7dc1f0c63d36ffc0347489abd3b475d4ec5370a068a0f2d8d8aed2e\": container with ID starting with 6ed6ebdce7dc1f0c63d36ffc0347489abd3b475d4ec5370a068a0f2d8d8aed2e not found: ID does not exist" containerID="6ed6ebdce7dc1f0c63d36ffc0347489abd3b475d4ec5370a068a0f2d8d8aed2e"
Feb 02 13:44:50 crc kubenswrapper[4721]: I0202 13:44:50.022459 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ed6ebdce7dc1f0c63d36ffc0347489abd3b475d4ec5370a068a0f2d8d8aed2e"} err="failed to get container status \"6ed6ebdce7dc1f0c63d36ffc0347489abd3b475d4ec5370a068a0f2d8d8aed2e\": rpc error: code = NotFound desc = could not find container \"6ed6ebdce7dc1f0c63d36ffc0347489abd3b475d4ec5370a068a0f2d8d8aed2e\": container with ID starting with 6ed6ebdce7dc1f0c63d36ffc0347489abd3b475d4ec5370a068a0f2d8d8aed2e not found: ID does not exist"
Feb 02 13:44:50 crc kubenswrapper[4721]: I0202 13:44:50.436157 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7ad3a36-c7b7-42f9-a87a-773830064c68" path="/var/lib/kubelet/pods/c7ad3a36-c7b7-42f9-a87a-773830064c68/volumes"
Feb 02 13:44:59 crc kubenswrapper[4721]: I0202 13:44:59.410720 4721 scope.go:117] "RemoveContainer" containerID="d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375"
Feb 02 13:44:59 crc kubenswrapper[4721]: E0202 13:44:59.413989 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:45:00 crc kubenswrapper[4721]: I0202 13:45:00.164453 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500665-gnnxh"]
Feb 02 13:45:00 crc kubenswrapper[4721]: E0202 13:45:00.164940 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7ad3a36-c7b7-42f9-a87a-773830064c68" containerName="extract-content"
Feb 02 13:45:00 crc kubenswrapper[4721]: I0202 13:45:00.164957 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7ad3a36-c7b7-42f9-a87a-773830064c68" containerName="extract-content"
Feb 02 13:45:00 crc kubenswrapper[4721]: E0202 13:45:00.164969 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7ad3a36-c7b7-42f9-a87a-773830064c68" containerName="extract-utilities"
Feb 02 13:45:00 crc kubenswrapper[4721]: I0202 13:45:00.164975 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7ad3a36-c7b7-42f9-a87a-773830064c68" containerName="extract-utilities"
Feb 02 13:45:00 crc kubenswrapper[4721]: E0202 13:45:00.164998 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7ad3a36-c7b7-42f9-a87a-773830064c68" containerName="registry-server"
Feb 02 13:45:00 crc kubenswrapper[4721]: I0202 13:45:00.165006 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7ad3a36-c7b7-42f9-a87a-773830064c68" containerName="registry-server"
Feb 02 13:45:00 crc kubenswrapper[4721]: I0202 13:45:00.165247 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7ad3a36-c7b7-42f9-a87a-773830064c68" containerName="registry-server"
Feb 02 13:45:00 crc kubenswrapper[4721]: I0202 13:45:00.166238 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-gnnxh"
Feb 02 13:45:00 crc kubenswrapper[4721]: I0202 13:45:00.170469 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 02 13:45:00 crc kubenswrapper[4721]: I0202 13:45:00.171994 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 02 13:45:00 crc kubenswrapper[4721]: I0202 13:45:00.181079 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500665-gnnxh"]
Feb 02 13:45:00 crc kubenswrapper[4721]: I0202 13:45:00.335387 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c1f0398-e18b-44f0-b0a8-21f2de8af4d0-config-volume\") pod \"collect-profiles-29500665-gnnxh\" (UID: \"0c1f0398-e18b-44f0-b0a8-21f2de8af4d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-gnnxh"
Feb 02 13:45:00 crc kubenswrapper[4721]: I0202 13:45:00.335428 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbw5l\" (UniqueName: \"kubernetes.io/projected/0c1f0398-e18b-44f0-b0a8-21f2de8af4d0-kube-api-access-lbw5l\") pod \"collect-profiles-29500665-gnnxh\" (UID: \"0c1f0398-e18b-44f0-b0a8-21f2de8af4d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-gnnxh"
Feb 02 13:45:00 crc kubenswrapper[4721]: I0202 13:45:00.335956 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c1f0398-e18b-44f0-b0a8-21f2de8af4d0-secret-volume\") pod \"collect-profiles-29500665-gnnxh\" (UID: \"0c1f0398-e18b-44f0-b0a8-21f2de8af4d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-gnnxh"
Feb 02 13:45:00 crc kubenswrapper[4721]: I0202 13:45:00.437915 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c1f0398-e18b-44f0-b0a8-21f2de8af4d0-secret-volume\") pod \"collect-profiles-29500665-gnnxh\" (UID: \"0c1f0398-e18b-44f0-b0a8-21f2de8af4d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-gnnxh"
Feb 02 13:45:00 crc kubenswrapper[4721]: I0202 13:45:00.438037 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c1f0398-e18b-44f0-b0a8-21f2de8af4d0-config-volume\") pod \"collect-profiles-29500665-gnnxh\" (UID: \"0c1f0398-e18b-44f0-b0a8-21f2de8af4d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-gnnxh"
Feb 02 13:45:00 crc kubenswrapper[4721]: I0202 13:45:00.438054 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbw5l\" (UniqueName: \"kubernetes.io/projected/0c1f0398-e18b-44f0-b0a8-21f2de8af4d0-kube-api-access-lbw5l\") pod \"collect-profiles-29500665-gnnxh\" (UID: \"0c1f0398-e18b-44f0-b0a8-21f2de8af4d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-gnnxh"
Feb 02 13:45:00 crc kubenswrapper[4721]: I0202 13:45:00.440818 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 02 13:45:00 crc kubenswrapper[4721]: I0202 13:45:00.450770 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c1f0398-e18b-44f0-b0a8-21f2de8af4d0-config-volume\") pod \"collect-profiles-29500665-gnnxh\" (UID: \"0c1f0398-e18b-44f0-b0a8-21f2de8af4d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-gnnxh"
Feb 02 13:45:00 crc kubenswrapper[4721]: I0202 13:45:00.460388 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c1f0398-e18b-44f0-b0a8-21f2de8af4d0-secret-volume\") pod \"collect-profiles-29500665-gnnxh\" (UID: \"0c1f0398-e18b-44f0-b0a8-21f2de8af4d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-gnnxh"
Feb 02 13:45:00 crc kubenswrapper[4721]: I0202 13:45:00.461154 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbw5l\" (UniqueName: \"kubernetes.io/projected/0c1f0398-e18b-44f0-b0a8-21f2de8af4d0-kube-api-access-lbw5l\") pod \"collect-profiles-29500665-gnnxh\" (UID: \"0c1f0398-e18b-44f0-b0a8-21f2de8af4d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-gnnxh"
Feb 02 13:45:00 crc kubenswrapper[4721]: I0202 13:45:00.502767 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 02 13:45:00 crc kubenswrapper[4721]: I0202 13:45:00.510882 4721 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-gnnxh" Feb 02 13:45:01 crc kubenswrapper[4721]: I0202 13:45:01.120728 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500665-gnnxh"] Feb 02 13:45:01 crc kubenswrapper[4721]: I0202 13:45:01.145186 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-gnnxh" event={"ID":"0c1f0398-e18b-44f0-b0a8-21f2de8af4d0","Type":"ContainerStarted","Data":"205ede0bed5fd25627359c20802c03d6484bd8e30b4d9b9a947691ffeb7d8f21"} Feb 02 13:45:02 crc kubenswrapper[4721]: I0202 13:45:02.155752 4721 generic.go:334] "Generic (PLEG): container finished" podID="0c1f0398-e18b-44f0-b0a8-21f2de8af4d0" containerID="65b28cee5d7aba79bd5fdc801054328d6006f1cbcc896f6fdd692cc1c3bf2690" exitCode=0 Feb 02 13:45:02 crc kubenswrapper[4721]: I0202 13:45:02.155886 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-gnnxh" event={"ID":"0c1f0398-e18b-44f0-b0a8-21f2de8af4d0","Type":"ContainerDied","Data":"65b28cee5d7aba79bd5fdc801054328d6006f1cbcc896f6fdd692cc1c3bf2690"} Feb 02 13:45:03 crc kubenswrapper[4721]: I0202 13:45:03.653767 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-gnnxh" Feb 02 13:45:03 crc kubenswrapper[4721]: I0202 13:45:03.745441 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c1f0398-e18b-44f0-b0a8-21f2de8af4d0-config-volume\") pod \"0c1f0398-e18b-44f0-b0a8-21f2de8af4d0\" (UID: \"0c1f0398-e18b-44f0-b0a8-21f2de8af4d0\") " Feb 02 13:45:03 crc kubenswrapper[4721]: I0202 13:45:03.745580 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c1f0398-e18b-44f0-b0a8-21f2de8af4d0-secret-volume\") pod \"0c1f0398-e18b-44f0-b0a8-21f2de8af4d0\" (UID: \"0c1f0398-e18b-44f0-b0a8-21f2de8af4d0\") " Feb 02 13:45:03 crc kubenswrapper[4721]: I0202 13:45:03.745685 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbw5l\" (UniqueName: \"kubernetes.io/projected/0c1f0398-e18b-44f0-b0a8-21f2de8af4d0-kube-api-access-lbw5l\") pod \"0c1f0398-e18b-44f0-b0a8-21f2de8af4d0\" (UID: \"0c1f0398-e18b-44f0-b0a8-21f2de8af4d0\") " Feb 02 13:45:03 crc kubenswrapper[4721]: I0202 13:45:03.746414 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c1f0398-e18b-44f0-b0a8-21f2de8af4d0-config-volume" (OuterVolumeSpecName: "config-volume") pod "0c1f0398-e18b-44f0-b0a8-21f2de8af4d0" (UID: "0c1f0398-e18b-44f0-b0a8-21f2de8af4d0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 13:45:03 crc kubenswrapper[4721]: I0202 13:45:03.751348 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c1f0398-e18b-44f0-b0a8-21f2de8af4d0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0c1f0398-e18b-44f0-b0a8-21f2de8af4d0" (UID: "0c1f0398-e18b-44f0-b0a8-21f2de8af4d0"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 13:45:03 crc kubenswrapper[4721]: I0202 13:45:03.751402 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c1f0398-e18b-44f0-b0a8-21f2de8af4d0-kube-api-access-lbw5l" (OuterVolumeSpecName: "kube-api-access-lbw5l") pod "0c1f0398-e18b-44f0-b0a8-21f2de8af4d0" (UID: "0c1f0398-e18b-44f0-b0a8-21f2de8af4d0"). InnerVolumeSpecName "kube-api-access-lbw5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:45:03 crc kubenswrapper[4721]: I0202 13:45:03.848594 4721 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c1f0398-e18b-44f0-b0a8-21f2de8af4d0-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 13:45:03 crc kubenswrapper[4721]: I0202 13:45:03.848627 4721 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c1f0398-e18b-44f0-b0a8-21f2de8af4d0-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 13:45:03 crc kubenswrapper[4721]: I0202 13:45:03.848638 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbw5l\" (UniqueName: \"kubernetes.io/projected/0c1f0398-e18b-44f0-b0a8-21f2de8af4d0-kube-api-access-lbw5l\") on node \"crc\" DevicePath \"\"" Feb 02 13:45:04 crc kubenswrapper[4721]: I0202 13:45:04.181670 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-gnnxh" event={"ID":"0c1f0398-e18b-44f0-b0a8-21f2de8af4d0","Type":"ContainerDied","Data":"205ede0bed5fd25627359c20802c03d6484bd8e30b4d9b9a947691ffeb7d8f21"} Feb 02 13:45:04 crc kubenswrapper[4721]: I0202 13:45:04.181714 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="205ede0bed5fd25627359c20802c03d6484bd8e30b4d9b9a947691ffeb7d8f21" Feb 02 13:45:04 crc kubenswrapper[4721]: I0202 13:45:04.181730 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500665-gnnxh" Feb 02 13:45:04 crc kubenswrapper[4721]: I0202 13:45:04.752619 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500620-rxhcg"] Feb 02 13:45:04 crc kubenswrapper[4721]: I0202 13:45:04.762169 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500620-rxhcg"] Feb 02 13:45:06 crc kubenswrapper[4721]: I0202 13:45:06.422020 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="873a8c0c-9da4-4619-9ebf-7a327eb22b7e" path="/var/lib/kubelet/pods/873a8c0c-9da4-4619-9ebf-7a327eb22b7e/volumes" Feb 02 13:45:13 crc kubenswrapper[4721]: I0202 13:45:13.969797 4721 scope.go:117] "RemoveContainer" containerID="e5df09a9819f06ce199926abbd018fc8f1a3ae0cbae66765ccb3d7e1c3a1a81c" Feb 02 13:45:15 crc kubenswrapper[4721]: I0202 13:45:15.410508 4721 scope.go:117] "RemoveContainer" containerID="d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375" Feb 02 13:45:15 crc kubenswrapper[4721]: E0202 13:45:15.411427 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:45:28 crc kubenswrapper[4721]: I0202 13:45:28.607633 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9hmgc"] Feb 02 13:45:28 crc kubenswrapper[4721]: E0202 13:45:28.608544 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c1f0398-e18b-44f0-b0a8-21f2de8af4d0" containerName="collect-profiles" Feb 02 13:45:28 crc kubenswrapper[4721]: I0202 13:45:28.608556 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c1f0398-e18b-44f0-b0a8-21f2de8af4d0" containerName="collect-profiles" Feb 02 13:45:28 crc kubenswrapper[4721]: I0202 13:45:28.608791 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c1f0398-e18b-44f0-b0a8-21f2de8af4d0" containerName="collect-profiles" Feb 02 13:45:28 crc kubenswrapper[4721]: I0202 13:45:28.611260 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9hmgc" Feb 02 13:45:28 crc kubenswrapper[4721]: I0202 13:45:28.629621 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9hmgc"] Feb 02 13:45:28 crc kubenswrapper[4721]: I0202 13:45:28.763014 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbw72\" (UniqueName: \"kubernetes.io/projected/5dcec122-4dce-4cdd-ad6e-24defada74b1-kube-api-access-tbw72\") pod \"community-operators-9hmgc\" (UID: \"5dcec122-4dce-4cdd-ad6e-24defada74b1\") " pod="openshift-marketplace/community-operators-9hmgc" Feb 02 13:45:28 crc kubenswrapper[4721]: I0202 13:45:28.763192 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dcec122-4dce-4cdd-ad6e-24defada74b1-catalog-content\") pod \"community-operators-9hmgc\" (UID: \"5dcec122-4dce-4cdd-ad6e-24defada74b1\") " pod="openshift-marketplace/community-operators-9hmgc" Feb 02 13:45:28 crc kubenswrapper[4721]: I0202 13:45:28.763357 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dcec122-4dce-4cdd-ad6e-24defada74b1-utilities\") pod \"community-operators-9hmgc\" (UID: \"5dcec122-4dce-4cdd-ad6e-24defada74b1\") " pod="openshift-marketplace/community-operators-9hmgc" Feb 02 13:45:28 crc kubenswrapper[4721]: I0202 13:45:28.865726 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbw72\" (UniqueName: \"kubernetes.io/projected/5dcec122-4dce-4cdd-ad6e-24defada74b1-kube-api-access-tbw72\") pod \"community-operators-9hmgc\" (UID: \"5dcec122-4dce-4cdd-ad6e-24defada74b1\") " pod="openshift-marketplace/community-operators-9hmgc" Feb 02 13:45:28 crc kubenswrapper[4721]: I0202 13:45:28.866164 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dcec122-4dce-4cdd-ad6e-24defada74b1-catalog-content\") pod \"community-operators-9hmgc\" (UID: \"5dcec122-4dce-4cdd-ad6e-24defada74b1\") " pod="openshift-marketplace/community-operators-9hmgc" Feb 02 13:45:28 crc kubenswrapper[4721]: I0202 13:45:28.866266 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dcec122-4dce-4cdd-ad6e-24defada74b1-utilities\") pod \"community-operators-9hmgc\" (UID: \"5dcec122-4dce-4cdd-ad6e-24defada74b1\") " pod="openshift-marketplace/community-operators-9hmgc" Feb 02 13:45:28 crc kubenswrapper[4721]: I0202 13:45:28.866694 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dcec122-4dce-4cdd-ad6e-24defada74b1-catalog-content\") pod \"community-operators-9hmgc\" (UID: \"5dcec122-4dce-4cdd-ad6e-24defada74b1\") " pod="openshift-marketplace/community-operators-9hmgc" Feb 02 13:45:28 crc kubenswrapper[4721]: I0202 13:45:28.866852 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dcec122-4dce-4cdd-ad6e-24defada74b1-utilities\") pod \"community-operators-9hmgc\" (UID: \"5dcec122-4dce-4cdd-ad6e-24defada74b1\") " pod="openshift-marketplace/community-operators-9hmgc" Feb 02 13:45:28 crc kubenswrapper[4721]: I0202 13:45:28.893129 4721 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tbw72\" (UniqueName: \"kubernetes.io/projected/5dcec122-4dce-4cdd-ad6e-24defada74b1-kube-api-access-tbw72\") pod \"community-operators-9hmgc\" (UID: \"5dcec122-4dce-4cdd-ad6e-24defada74b1\") " pod="openshift-marketplace/community-operators-9hmgc" Feb 02 13:45:28 crc kubenswrapper[4721]: I0202 13:45:28.935399 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9hmgc" Feb 02 13:45:29 crc kubenswrapper[4721]: I0202 13:45:29.475179 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9hmgc"] Feb 02 13:45:30 crc kubenswrapper[4721]: I0202 13:45:30.427980 4721 scope.go:117] "RemoveContainer" containerID="d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375" Feb 02 13:45:30 crc kubenswrapper[4721]: E0202 13:45:30.429292 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:45:30 crc kubenswrapper[4721]: I0202 13:45:30.485454 4721 generic.go:334] "Generic (PLEG): container finished" podID="5dcec122-4dce-4cdd-ad6e-24defada74b1" containerID="0e7cad1ebd7f7a6d05f8327943a512f5f423e3558fe92dd60c9c32c61be8d8d0" exitCode=0 Feb 02 13:45:30 crc kubenswrapper[4721]: I0202 13:45:30.485506 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hmgc" event={"ID":"5dcec122-4dce-4cdd-ad6e-24defada74b1","Type":"ContainerDied","Data":"0e7cad1ebd7f7a6d05f8327943a512f5f423e3558fe92dd60c9c32c61be8d8d0"} Feb 02 13:45:30 crc kubenswrapper[4721]: I0202 13:45:30.485534 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hmgc" event={"ID":"5dcec122-4dce-4cdd-ad6e-24defada74b1","Type":"ContainerStarted","Data":"43cd22e32e4c677a4e0ac7ef0068bea7db0f78bbfcb1330f8a9bfd2c11936245"} Feb 02 13:45:30 crc kubenswrapper[4721]: I0202 13:45:30.488870 4721 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 13:45:32 crc kubenswrapper[4721]: I0202 13:45:32.507702 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hmgc" event={"ID":"5dcec122-4dce-4cdd-ad6e-24defada74b1","Type":"ContainerStarted","Data":"4e56addc09f5ed3e298095888ff275b600501427656ee7ae5fa23269c5facfb3"} Feb 02 13:45:34 crc kubenswrapper[4721]: I0202 13:45:34.531645 4721 generic.go:334] "Generic (PLEG): container finished" podID="5dcec122-4dce-4cdd-ad6e-24defada74b1" containerID="4e56addc09f5ed3e298095888ff275b600501427656ee7ae5fa23269c5facfb3" exitCode=0 Feb 02 13:45:34 crc kubenswrapper[4721]: I0202 13:45:34.531707 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hmgc" event={"ID":"5dcec122-4dce-4cdd-ad6e-24defada74b1","Type":"ContainerDied","Data":"4e56addc09f5ed3e298095888ff275b600501427656ee7ae5fa23269c5facfb3"} Feb 02 13:45:35 crc kubenswrapper[4721]: I0202 13:45:35.553444 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hmgc" 
event={"ID":"5dcec122-4dce-4cdd-ad6e-24defada74b1","Type":"ContainerStarted","Data":"aaf621df543f587a9ca28561b98c44ff9f550b096b446211468f028715a41de2"} Feb 02 13:45:35 crc kubenswrapper[4721]: I0202 13:45:35.586805 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9hmgc" podStartSLOduration=3.059573766 podStartE2EDuration="7.586778682s" podCreationTimestamp="2026-02-02 13:45:28 +0000 UTC" firstStartedPulling="2026-02-02 13:45:30.488550259 +0000 UTC m=+2670.791064648" lastFinishedPulling="2026-02-02 13:45:35.015755175 +0000 UTC m=+2675.318269564" observedRunningTime="2026-02-02 13:45:35.580677317 +0000 UTC m=+2675.883191766" watchObservedRunningTime="2026-02-02 13:45:35.586778682 +0000 UTC m=+2675.889293081" Feb 02 13:45:38 crc kubenswrapper[4721]: I0202 13:45:38.936289 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9hmgc" Feb 02 13:45:38 crc kubenswrapper[4721]: I0202 13:45:38.936822 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9hmgc" Feb 02 13:45:39 crc kubenswrapper[4721]: I0202 13:45:39.005361 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9hmgc" Feb 02 13:45:44 crc kubenswrapper[4721]: I0202 13:45:44.410106 4721 scope.go:117] "RemoveContainer" containerID="d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375" Feb 02 13:45:44 crc kubenswrapper[4721]: E0202 13:45:44.411102 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:45:48 crc kubenswrapper[4721]: I0202 13:45:48.989172 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9hmgc" Feb 02 13:45:49 crc kubenswrapper[4721]: I0202 13:45:49.038956 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9hmgc"] Feb 02 13:45:49 crc kubenswrapper[4721]: I0202 13:45:49.686745 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9hmgc" podUID="5dcec122-4dce-4cdd-ad6e-24defada74b1" containerName="registry-server" containerID="cri-o://aaf621df543f587a9ca28561b98c44ff9f550b096b446211468f028715a41de2" gracePeriod=2 Feb 02 13:45:50 crc kubenswrapper[4721]: I0202 13:45:50.224248 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9hmgc" Feb 02 13:45:50 crc kubenswrapper[4721]: I0202 13:45:50.308692 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dcec122-4dce-4cdd-ad6e-24defada74b1-catalog-content\") pod \"5dcec122-4dce-4cdd-ad6e-24defada74b1\" (UID: \"5dcec122-4dce-4cdd-ad6e-24defada74b1\") " Feb 02 13:45:50 crc kubenswrapper[4721]: I0202 13:45:50.309049 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbw72\" (UniqueName: \"kubernetes.io/projected/5dcec122-4dce-4cdd-ad6e-24defada74b1-kube-api-access-tbw72\") pod \"5dcec122-4dce-4cdd-ad6e-24defada74b1\" (UID: \"5dcec122-4dce-4cdd-ad6e-24defada74b1\") " Feb 02 13:45:50 crc kubenswrapper[4721]: I0202 13:45:50.309223 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dcec122-4dce-4cdd-ad6e-24defada74b1-utilities\") pod \"5dcec122-4dce-4cdd-ad6e-24defada74b1\" (UID: \"5dcec122-4dce-4cdd-ad6e-24defada74b1\") " Feb 02 13:45:50 crc kubenswrapper[4721]: I0202 13:45:50.310298 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dcec122-4dce-4cdd-ad6e-24defada74b1-utilities" (OuterVolumeSpecName: "utilities") pod "5dcec122-4dce-4cdd-ad6e-24defada74b1" (UID: "5dcec122-4dce-4cdd-ad6e-24defada74b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:45:50 crc kubenswrapper[4721]: I0202 13:45:50.314940 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dcec122-4dce-4cdd-ad6e-24defada74b1-kube-api-access-tbw72" (OuterVolumeSpecName: "kube-api-access-tbw72") pod "5dcec122-4dce-4cdd-ad6e-24defada74b1" (UID: "5dcec122-4dce-4cdd-ad6e-24defada74b1"). InnerVolumeSpecName "kube-api-access-tbw72". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:45:50 crc kubenswrapper[4721]: I0202 13:45:50.364847 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dcec122-4dce-4cdd-ad6e-24defada74b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5dcec122-4dce-4cdd-ad6e-24defada74b1" (UID: "5dcec122-4dce-4cdd-ad6e-24defada74b1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:45:50 crc kubenswrapper[4721]: I0202 13:45:50.412359 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dcec122-4dce-4cdd-ad6e-24defada74b1-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:45:50 crc kubenswrapper[4721]: I0202 13:45:50.412388 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dcec122-4dce-4cdd-ad6e-24defada74b1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:45:50 crc kubenswrapper[4721]: I0202 13:45:50.412400 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbw72\" (UniqueName: \"kubernetes.io/projected/5dcec122-4dce-4cdd-ad6e-24defada74b1-kube-api-access-tbw72\") on node \"crc\" DevicePath \"\"" Feb 02 13:45:50 crc kubenswrapper[4721]: I0202 13:45:50.700655 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9hmgc" Feb 02 13:45:50 crc kubenswrapper[4721]: I0202 13:45:50.700691 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hmgc" event={"ID":"5dcec122-4dce-4cdd-ad6e-24defada74b1","Type":"ContainerDied","Data":"aaf621df543f587a9ca28561b98c44ff9f550b096b446211468f028715a41de2"} Feb 02 13:45:50 crc kubenswrapper[4721]: I0202 13:45:50.701248 4721 scope.go:117] "RemoveContainer" containerID="aaf621df543f587a9ca28561b98c44ff9f550b096b446211468f028715a41de2" Feb 02 13:45:50 crc kubenswrapper[4721]: I0202 13:45:50.700623 4721 generic.go:334] "Generic (PLEG): container finished" podID="5dcec122-4dce-4cdd-ad6e-24defada74b1" containerID="aaf621df543f587a9ca28561b98c44ff9f550b096b446211468f028715a41de2" exitCode=0 Feb 02 13:45:50 crc kubenswrapper[4721]: I0202 13:45:50.701355 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hmgc" event={"ID":"5dcec122-4dce-4cdd-ad6e-24defada74b1","Type":"ContainerDied","Data":"43cd22e32e4c677a4e0ac7ef0068bea7db0f78bbfcb1330f8a9bfd2c11936245"} Feb 02 13:45:50 crc kubenswrapper[4721]: I0202 13:45:50.730458 4721 scope.go:117] "RemoveContainer" containerID="4e56addc09f5ed3e298095888ff275b600501427656ee7ae5fa23269c5facfb3" Feb 02 13:45:50 crc kubenswrapper[4721]: I0202 13:45:50.736937 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9hmgc"] Feb 02 13:45:50 crc kubenswrapper[4721]: I0202 13:45:50.750613 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9hmgc"] Feb 02 13:45:50 crc kubenswrapper[4721]: I0202 13:45:50.775634 4721 scope.go:117] "RemoveContainer" containerID="0e7cad1ebd7f7a6d05f8327943a512f5f423e3558fe92dd60c9c32c61be8d8d0" Feb 02 13:45:50 crc kubenswrapper[4721]: I0202 13:45:50.839216 4721 scope.go:117] "RemoveContainer" containerID="aaf621df543f587a9ca28561b98c44ff9f550b096b446211468f028715a41de2" Feb 02 13:45:50 crc kubenswrapper[4721]: E0202 13:45:50.839701 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaf621df543f587a9ca28561b98c44ff9f550b096b446211468f028715a41de2\": container with ID starting with aaf621df543f587a9ca28561b98c44ff9f550b096b446211468f028715a41de2 not found: ID does not exist" containerID="aaf621df543f587a9ca28561b98c44ff9f550b096b446211468f028715a41de2" Feb 02 13:45:50 crc kubenswrapper[4721]: I0202 13:45:50.839751 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaf621df543f587a9ca28561b98c44ff9f550b096b446211468f028715a41de2"} err="failed to get container status \"aaf621df543f587a9ca28561b98c44ff9f550b096b446211468f028715a41de2\": rpc error: code = NotFound desc = could not find container \"aaf621df543f587a9ca28561b98c44ff9f550b096b446211468f028715a41de2\": container with ID starting with aaf621df543f587a9ca28561b98c44ff9f550b096b446211468f028715a41de2 not found: ID does not exist" Feb 02 13:45:50 crc kubenswrapper[4721]: I0202 13:45:50.839787 4721 scope.go:117] "RemoveContainer" containerID="4e56addc09f5ed3e298095888ff275b600501427656ee7ae5fa23269c5facfb3" Feb 02 13:45:50 crc kubenswrapper[4721]: E0202 13:45:50.840491 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e56addc09f5ed3e298095888ff275b600501427656ee7ae5fa23269c5facfb3\": container with ID 
starting with 4e56addc09f5ed3e298095888ff275b600501427656ee7ae5fa23269c5facfb3 not found: ID does not exist" containerID="4e56addc09f5ed3e298095888ff275b600501427656ee7ae5fa23269c5facfb3" Feb 02 13:45:50 crc kubenswrapper[4721]: I0202 13:45:50.840515 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e56addc09f5ed3e298095888ff275b600501427656ee7ae5fa23269c5facfb3"} err="failed to get container status \"4e56addc09f5ed3e298095888ff275b600501427656ee7ae5fa23269c5facfb3\": rpc error: code = NotFound desc = could not find container \"4e56addc09f5ed3e298095888ff275b600501427656ee7ae5fa23269c5facfb3\": container with ID starting with 4e56addc09f5ed3e298095888ff275b600501427656ee7ae5fa23269c5facfb3 not found: ID does not exist" Feb 02 13:45:50 crc kubenswrapper[4721]: I0202 13:45:50.840532 4721 scope.go:117] "RemoveContainer" containerID="0e7cad1ebd7f7a6d05f8327943a512f5f423e3558fe92dd60c9c32c61be8d8d0" Feb 02 13:45:50 crc kubenswrapper[4721]: E0202 13:45:50.840735 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e7cad1ebd7f7a6d05f8327943a512f5f423e3558fe92dd60c9c32c61be8d8d0\": container with ID starting with 0e7cad1ebd7f7a6d05f8327943a512f5f423e3558fe92dd60c9c32c61be8d8d0 not found: ID does not exist" containerID="0e7cad1ebd7f7a6d05f8327943a512f5f423e3558fe92dd60c9c32c61be8d8d0" Feb 02 13:45:50 crc kubenswrapper[4721]: I0202 13:45:50.840754 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e7cad1ebd7f7a6d05f8327943a512f5f423e3558fe92dd60c9c32c61be8d8d0"} err="failed to get container status \"0e7cad1ebd7f7a6d05f8327943a512f5f423e3558fe92dd60c9c32c61be8d8d0\": rpc error: code = NotFound desc = could not find container \"0e7cad1ebd7f7a6d05f8327943a512f5f423e3558fe92dd60c9c32c61be8d8d0\": container with ID starting with 0e7cad1ebd7f7a6d05f8327943a512f5f423e3558fe92dd60c9c32c61be8d8d0 not found: ID does not exist" Feb 02 13:45:52 crc kubenswrapper[4721]: I0202 13:45:52.423707 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dcec122-4dce-4cdd-ad6e-24defada74b1" path="/var/lib/kubelet/pods/5dcec122-4dce-4cdd-ad6e-24defada74b1/volumes" Feb 02 13:45:57 crc kubenswrapper[4721]: I0202 13:45:57.410519 4721 scope.go:117] "RemoveContainer" containerID="d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375" Feb 02 13:45:57 crc kubenswrapper[4721]: E0202 13:45:57.411385 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:46:08 crc kubenswrapper[4721]: I0202 13:46:08.410116 4721 scope.go:117] "RemoveContainer" containerID="d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375" Feb 02 13:46:08 crc kubenswrapper[4721]: E0202 13:46:08.410845 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:46:20 crc kubenswrapper[4721]: I0202 13:46:20.425336 4721 scope.go:117] "RemoveContainer" containerID="d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375" Feb 02 13:46:21 crc kubenswrapper[4721]: I0202 13:46:21.026272 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerStarted","Data":"8408071901f7ef7e9399e59808de53d109544919537a0777d33359b2080a1dcb"} Feb 02 13:47:56 crc kubenswrapper[4721]: I0202 13:47:56.056774 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lbt8t"] Feb 02 13:47:56 crc kubenswrapper[4721]: E0202 13:47:56.057853 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dcec122-4dce-4cdd-ad6e-24defada74b1" containerName="extract-content" Feb 02 13:47:56 crc kubenswrapper[4721]: I0202 13:47:56.057870 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dcec122-4dce-4cdd-ad6e-24defada74b1" containerName="extract-content" Feb 02 13:47:56 crc kubenswrapper[4721]: E0202 13:47:56.057909 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dcec122-4dce-4cdd-ad6e-24defada74b1" containerName="registry-server" Feb 02 13:47:56 crc kubenswrapper[4721]: I0202 13:47:56.057915 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dcec122-4dce-4cdd-ad6e-24defada74b1" containerName="registry-server" Feb 02 13:47:56 crc kubenswrapper[4721]: E0202 13:47:56.057935 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dcec122-4dce-4cdd-ad6e-24defada74b1" containerName="extract-utilities" Feb 02 13:47:56 crc kubenswrapper[4721]: I0202 13:47:56.057941 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dcec122-4dce-4cdd-ad6e-24defada74b1" containerName="extract-utilities" Feb 02 13:47:56 crc kubenswrapper[4721]: I0202 13:47:56.058204 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dcec122-4dce-4cdd-ad6e-24defada74b1" containerName="registry-server" Feb 02 13:47:56 crc kubenswrapper[4721]: I0202 13:47:56.060219 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lbt8t" Feb 02 13:47:56 crc kubenswrapper[4721]: I0202 13:47:56.088671 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbt8t"] Feb 02 13:47:56 crc kubenswrapper[4721]: I0202 13:47:56.193839 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3deafa69-5966-4e1f-980e-425af72acdc0-utilities\") pod \"redhat-marketplace-lbt8t\" (UID: \"3deafa69-5966-4e1f-980e-425af72acdc0\") " pod="openshift-marketplace/redhat-marketplace-lbt8t" Feb 02 13:47:56 crc kubenswrapper[4721]: I0202 13:47:56.193955 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3deafa69-5966-4e1f-980e-425af72acdc0-catalog-content\") pod \"redhat-marketplace-lbt8t\" (UID: \"3deafa69-5966-4e1f-980e-425af72acdc0\") " pod="openshift-marketplace/redhat-marketplace-lbt8t" Feb 02 13:47:56 crc kubenswrapper[4721]: I0202 13:47:56.194235 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gv2r\" (UniqueName: \"kubernetes.io/projected/3deafa69-5966-4e1f-980e-425af72acdc0-kube-api-access-5gv2r\") pod \"redhat-marketplace-lbt8t\" (UID: \"3deafa69-5966-4e1f-980e-425af72acdc0\") " pod="openshift-marketplace/redhat-marketplace-lbt8t" Feb 02 13:47:56 crc kubenswrapper[4721]: I0202 13:47:56.296382 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gv2r\" (UniqueName: \"kubernetes.io/projected/3deafa69-5966-4e1f-980e-425af72acdc0-kube-api-access-5gv2r\") pod \"redhat-marketplace-lbt8t\" (UID: \"3deafa69-5966-4e1f-980e-425af72acdc0\") " pod="openshift-marketplace/redhat-marketplace-lbt8t" Feb 02 13:47:56 crc kubenswrapper[4721]: I0202 13:47:56.296528 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3deafa69-5966-4e1f-980e-425af72acdc0-utilities\") pod \"redhat-marketplace-lbt8t\" (UID: \"3deafa69-5966-4e1f-980e-425af72acdc0\") " pod="openshift-marketplace/redhat-marketplace-lbt8t" Feb 02 13:47:56 crc kubenswrapper[4721]: I0202 13:47:56.296581 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3deafa69-5966-4e1f-980e-425af72acdc0-catalog-content\") pod \"redhat-marketplace-lbt8t\" (UID: \"3deafa69-5966-4e1f-980e-425af72acdc0\") " pod="openshift-marketplace/redhat-marketplace-lbt8t" Feb 02 13:47:56 crc kubenswrapper[4721]: I0202 13:47:56.297056 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3deafa69-5966-4e1f-980e-425af72acdc0-catalog-content\") pod \"redhat-marketplace-lbt8t\" (UID: \"3deafa69-5966-4e1f-980e-425af72acdc0\") " pod="openshift-marketplace/redhat-marketplace-lbt8t" Feb 02 13:47:56 crc kubenswrapper[4721]: I0202 13:47:56.297311 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3deafa69-5966-4e1f-980e-425af72acdc0-utilities\") pod \"redhat-marketplace-lbt8t\" (UID: \"3deafa69-5966-4e1f-980e-425af72acdc0\") " pod="openshift-marketplace/redhat-marketplace-lbt8t" Feb 02 13:47:56 crc kubenswrapper[4721]: I0202 13:47:56.315877 4721 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-5gv2r\" (UniqueName: \"kubernetes.io/projected/3deafa69-5966-4e1f-980e-425af72acdc0-kube-api-access-5gv2r\") pod \"redhat-marketplace-lbt8t\" (UID: \"3deafa69-5966-4e1f-980e-425af72acdc0\") " pod="openshift-marketplace/redhat-marketplace-lbt8t" Feb 02 13:47:56 crc kubenswrapper[4721]: I0202 13:47:56.385803 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lbt8t" Feb 02 13:47:57 crc kubenswrapper[4721]: I0202 13:47:56.999919 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbt8t"] Feb 02 13:47:57 crc kubenswrapper[4721]: I0202 13:47:57.070687 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbt8t" event={"ID":"3deafa69-5966-4e1f-980e-425af72acdc0","Type":"ContainerStarted","Data":"6202bf2428b0781b43b75d7377d94c51ff868d6ddd6346141789a203072b4804"} Feb 02 13:47:58 crc kubenswrapper[4721]: I0202 13:47:58.097204 4721 generic.go:334] "Generic (PLEG): container finished" podID="3deafa69-5966-4e1f-980e-425af72acdc0" containerID="2f3db2b82513b61552bc392d16adcdf1a3fb9403b2f840dcd35c94a8884b9556" exitCode=0 Feb 02 13:47:58 crc kubenswrapper[4721]: I0202 13:47:58.097380 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbt8t" event={"ID":"3deafa69-5966-4e1f-980e-425af72acdc0","Type":"ContainerDied","Data":"2f3db2b82513b61552bc392d16adcdf1a3fb9403b2f840dcd35c94a8884b9556"} Feb 02 13:48:00 crc kubenswrapper[4721]: I0202 13:48:00.121959 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbt8t" event={"ID":"3deafa69-5966-4e1f-980e-425af72acdc0","Type":"ContainerStarted","Data":"612953a20e032543a014347153fd200146d412d83ed9394e0b870a25fbc7d9ea"} Feb 02 13:48:01 crc kubenswrapper[4721]: I0202 13:48:01.137056 4721 generic.go:334] "Generic (PLEG): container finished" podID="3deafa69-5966-4e1f-980e-425af72acdc0" containerID="612953a20e032543a014347153fd200146d412d83ed9394e0b870a25fbc7d9ea" exitCode=0 Feb 02 13:48:01 crc kubenswrapper[4721]: I0202 13:48:01.137129 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbt8t" event={"ID":"3deafa69-5966-4e1f-980e-425af72acdc0","Type":"ContainerDied","Data":"612953a20e032543a014347153fd200146d412d83ed9394e0b870a25fbc7d9ea"} Feb 02 13:48:02 crc kubenswrapper[4721]: I0202 13:48:02.173234 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbt8t" event={"ID":"3deafa69-5966-4e1f-980e-425af72acdc0","Type":"ContainerStarted","Data":"10a37277718872dfb34a51982c52a791fb47d31f2ee8f2faf4afd7c82e341051"} Feb 02 13:48:02 crc kubenswrapper[4721]: I0202 13:48:02.199388 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lbt8t" podStartSLOduration=2.510048068 podStartE2EDuration="6.199369518s" podCreationTimestamp="2026-02-02 13:47:56 +0000 UTC" firstStartedPulling="2026-02-02 13:47:58.101108325 +0000 UTC m=+2818.403622714" lastFinishedPulling="2026-02-02 13:48:01.790429775 +0000 UTC m=+2822.092944164" observedRunningTime="2026-02-02 13:48:02.194596499 +0000 UTC m=+2822.497110938" watchObservedRunningTime="2026-02-02 13:48:02.199369518 +0000 UTC m=+2822.501883907" Feb 02 13:48:06 crc kubenswrapper[4721]: I0202 13:48:06.386296 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-lbt8t" Feb 02 13:48:06 crc kubenswrapper[4721]: I0202 13:48:06.386925 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lbt8t" Feb 02 13:48:06 crc kubenswrapper[4721]: I0202 13:48:06.459429 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lbt8t" Feb 02 13:48:07 crc kubenswrapper[4721]: I0202 13:48:07.297725 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lbt8t" Feb 02 13:48:07 crc kubenswrapper[4721]: I0202 13:48:07.363784 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbt8t"] Feb 02 13:48:09 crc kubenswrapper[4721]: I0202 13:48:09.253848 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lbt8t" podUID="3deafa69-5966-4e1f-980e-425af72acdc0" containerName="registry-server" containerID="cri-o://10a37277718872dfb34a51982c52a791fb47d31f2ee8f2faf4afd7c82e341051" gracePeriod=2 Feb 02 13:48:09 crc kubenswrapper[4721]: I0202 13:48:09.810365 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lbt8t" Feb 02 13:48:09 crc kubenswrapper[4721]: I0202 13:48:09.938187 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gv2r\" (UniqueName: \"kubernetes.io/projected/3deafa69-5966-4e1f-980e-425af72acdc0-kube-api-access-5gv2r\") pod \"3deafa69-5966-4e1f-980e-425af72acdc0\" (UID: \"3deafa69-5966-4e1f-980e-425af72acdc0\") " Feb 02 13:48:09 crc kubenswrapper[4721]: I0202 13:48:09.938287 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3deafa69-5966-4e1f-980e-425af72acdc0-catalog-content\") pod \"3deafa69-5966-4e1f-980e-425af72acdc0\" (UID: \"3deafa69-5966-4e1f-980e-425af72acdc0\") " Feb 02 13:48:09 crc kubenswrapper[4721]: I0202 13:48:09.938391 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3deafa69-5966-4e1f-980e-425af72acdc0-utilities\") pod \"3deafa69-5966-4e1f-980e-425af72acdc0\" (UID: \"3deafa69-5966-4e1f-980e-425af72acdc0\") " Feb 02 13:48:09 crc kubenswrapper[4721]: I0202 13:48:09.940541 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3deafa69-5966-4e1f-980e-425af72acdc0-utilities" (OuterVolumeSpecName: "utilities") pod "3deafa69-5966-4e1f-980e-425af72acdc0" (UID: "3deafa69-5966-4e1f-980e-425af72acdc0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:48:09 crc kubenswrapper[4721]: I0202 13:48:09.945402 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3deafa69-5966-4e1f-980e-425af72acdc0-kube-api-access-5gv2r" (OuterVolumeSpecName: "kube-api-access-5gv2r") pod "3deafa69-5966-4e1f-980e-425af72acdc0" (UID: "3deafa69-5966-4e1f-980e-425af72acdc0"). InnerVolumeSpecName "kube-api-access-5gv2r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:48:09 crc kubenswrapper[4721]: I0202 13:48:09.961426 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3deafa69-5966-4e1f-980e-425af72acdc0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3deafa69-5966-4e1f-980e-425af72acdc0" (UID: "3deafa69-5966-4e1f-980e-425af72acdc0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:48:10 crc kubenswrapper[4721]: I0202 13:48:10.041559 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3deafa69-5966-4e1f-980e-425af72acdc0-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:48:10 crc kubenswrapper[4721]: I0202 13:48:10.041617 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gv2r\" (UniqueName: \"kubernetes.io/projected/3deafa69-5966-4e1f-980e-425af72acdc0-kube-api-access-5gv2r\") on node \"crc\" DevicePath \"\"" Feb 02 13:48:10 crc kubenswrapper[4721]: I0202 13:48:10.041630 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3deafa69-5966-4e1f-980e-425af72acdc0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:48:10 crc kubenswrapper[4721]: I0202 13:48:10.267386 4721 generic.go:334] "Generic (PLEG): container finished" podID="3deafa69-5966-4e1f-980e-425af72acdc0" containerID="10a37277718872dfb34a51982c52a791fb47d31f2ee8f2faf4afd7c82e341051" exitCode=0 Feb 02 13:48:10 crc kubenswrapper[4721]: I0202 13:48:10.267435 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbt8t" event={"ID":"3deafa69-5966-4e1f-980e-425af72acdc0","Type":"ContainerDied","Data":"10a37277718872dfb34a51982c52a791fb47d31f2ee8f2faf4afd7c82e341051"} Feb 02 13:48:10 crc kubenswrapper[4721]: I0202 13:48:10.267483 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbt8t" event={"ID":"3deafa69-5966-4e1f-980e-425af72acdc0","Type":"ContainerDied","Data":"6202bf2428b0781b43b75d7377d94c51ff868d6ddd6346141789a203072b4804"} Feb 02 13:48:10 crc kubenswrapper[4721]: I0202 13:48:10.267505 4721 scope.go:117] "RemoveContainer" containerID="10a37277718872dfb34a51982c52a791fb47d31f2ee8f2faf4afd7c82e341051" Feb 02 13:48:10 crc kubenswrapper[4721]: I0202 13:48:10.267523 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lbt8t" Feb 02 13:48:10 crc kubenswrapper[4721]: I0202 13:48:10.294466 4721 scope.go:117] "RemoveContainer" containerID="612953a20e032543a014347153fd200146d412d83ed9394e0b870a25fbc7d9ea" Feb 02 13:48:10 crc kubenswrapper[4721]: I0202 13:48:10.316232 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbt8t"] Feb 02 13:48:10 crc kubenswrapper[4721]: I0202 13:48:10.325132 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbt8t"] Feb 02 13:48:10 crc kubenswrapper[4721]: I0202 13:48:10.338843 4721 scope.go:117] "RemoveContainer" containerID="2f3db2b82513b61552bc392d16adcdf1a3fb9403b2f840dcd35c94a8884b9556" Feb 02 13:48:10 crc kubenswrapper[4721]: I0202 13:48:10.397110 4721 scope.go:117] "RemoveContainer" containerID="10a37277718872dfb34a51982c52a791fb47d31f2ee8f2faf4afd7c82e341051" Feb 02 13:48:10 crc kubenswrapper[4721]: E0202 13:48:10.397773 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10a37277718872dfb34a51982c52a791fb47d31f2ee8f2faf4afd7c82e341051\": container with ID starting with 10a37277718872dfb34a51982c52a791fb47d31f2ee8f2faf4afd7c82e341051 not found: ID does not exist" containerID="10a37277718872dfb34a51982c52a791fb47d31f2ee8f2faf4afd7c82e341051" Feb 02 13:48:10 crc kubenswrapper[4721]: I0202 13:48:10.397826 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10a37277718872dfb34a51982c52a791fb47d31f2ee8f2faf4afd7c82e341051"} err="failed to get container status \"10a37277718872dfb34a51982c52a791fb47d31f2ee8f2faf4afd7c82e341051\": rpc error: code = NotFound desc = could not find container \"10a37277718872dfb34a51982c52a791fb47d31f2ee8f2faf4afd7c82e341051\": container with ID starting with 10a37277718872dfb34a51982c52a791fb47d31f2ee8f2faf4afd7c82e341051 not found: ID does not exist" Feb 02 13:48:10 crc kubenswrapper[4721]: I0202 13:48:10.397859 4721 scope.go:117] "RemoveContainer" containerID="612953a20e032543a014347153fd200146d412d83ed9394e0b870a25fbc7d9ea" Feb 02 13:48:10 crc kubenswrapper[4721]: E0202 13:48:10.398396 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"612953a20e032543a014347153fd200146d412d83ed9394e0b870a25fbc7d9ea\": container with ID starting with 612953a20e032543a014347153fd200146d412d83ed9394e0b870a25fbc7d9ea not found: ID does not exist" containerID="612953a20e032543a014347153fd200146d412d83ed9394e0b870a25fbc7d9ea" Feb 02 13:48:10 crc kubenswrapper[4721]: I0202 13:48:10.398435 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"612953a20e032543a014347153fd200146d412d83ed9394e0b870a25fbc7d9ea"} err="failed to get container status \"612953a20e032543a014347153fd200146d412d83ed9394e0b870a25fbc7d9ea\": rpc error: code = NotFound desc = could not find container \"612953a20e032543a014347153fd200146d412d83ed9394e0b870a25fbc7d9ea\": container with ID starting with 612953a20e032543a014347153fd200146d412d83ed9394e0b870a25fbc7d9ea not found: ID does not exist" Feb 02 13:48:10 crc kubenswrapper[4721]: I0202 13:48:10.398453 4721 scope.go:117] "RemoveContainer" containerID="2f3db2b82513b61552bc392d16adcdf1a3fb9403b2f840dcd35c94a8884b9556" Feb 02 13:48:10 crc kubenswrapper[4721]: E0202 13:48:10.398790 4721 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2f3db2b82513b61552bc392d16adcdf1a3fb9403b2f840dcd35c94a8884b9556\": container with ID starting with 2f3db2b82513b61552bc392d16adcdf1a3fb9403b2f840dcd35c94a8884b9556 not found: ID does not exist" containerID="2f3db2b82513b61552bc392d16adcdf1a3fb9403b2f840dcd35c94a8884b9556" Feb 02 13:48:10 crc kubenswrapper[4721]: I0202 13:48:10.398891 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f3db2b82513b61552bc392d16adcdf1a3fb9403b2f840dcd35c94a8884b9556"} err="failed to get container status \"2f3db2b82513b61552bc392d16adcdf1a3fb9403b2f840dcd35c94a8884b9556\": rpc error: code = NotFound desc = could not find container \"2f3db2b82513b61552bc392d16adcdf1a3fb9403b2f840dcd35c94a8884b9556\": container with ID starting with 2f3db2b82513b61552bc392d16adcdf1a3fb9403b2f840dcd35c94a8884b9556 not found: ID does not exist" Feb 02 13:48:10 crc kubenswrapper[4721]: I0202 13:48:10.429822 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3deafa69-5966-4e1f-980e-425af72acdc0" path="/var/lib/kubelet/pods/3deafa69-5966-4e1f-980e-425af72acdc0/volumes" Feb 02 13:48:44 crc kubenswrapper[4721]: I0202 13:48:44.763271 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:48:44 crc kubenswrapper[4721]: I0202 13:48:44.763862 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:49:14 crc kubenswrapper[4721]: I0202 13:49:14.763991 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:49:14 crc kubenswrapper[4721]: I0202 13:49:14.764953 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:49:44 crc kubenswrapper[4721]: I0202 13:49:44.763799 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:49:44 crc kubenswrapper[4721]: I0202 13:49:44.764637 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:49:44 crc kubenswrapper[4721]: I0202 13:49:44.764704 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" Feb 02 13:49:44 crc kubenswrapper[4721]: I0202 13:49:44.766298 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8408071901f7ef7e9399e59808de53d109544919537a0777d33359b2080a1dcb"} pod="openshift-machine-config-operator/machine-config-daemon-rppjz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 13:49:44 crc kubenswrapper[4721]: I0202 13:49:44.766421 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" containerID="cri-o://8408071901f7ef7e9399e59808de53d109544919537a0777d33359b2080a1dcb" gracePeriod=600 Feb 02 13:49:45 crc kubenswrapper[4721]: I0202 13:49:45.364559 4721 generic.go:334] "Generic (PLEG): container finished" podID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerID="8408071901f7ef7e9399e59808de53d109544919537a0777d33359b2080a1dcb" exitCode=0 Feb 02 13:49:45 crc kubenswrapper[4721]: I0202 13:49:45.364645 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerDied","Data":"8408071901f7ef7e9399e59808de53d109544919537a0777d33359b2080a1dcb"} Feb 02 13:49:45 crc kubenswrapper[4721]: I0202 13:49:45.365000 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerStarted","Data":"b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95"} Feb 02 13:49:45 crc kubenswrapper[4721]: I0202 13:49:45.365030 4721 scope.go:117] "RemoveContainer" containerID="d8958dd1d812f7b9a664fedaaa541ab06354a91335826977890ec60b381f7375" Feb 02 13:52:14 crc kubenswrapper[4721]: I0202 13:52:14.765017 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:52:14 crc kubenswrapper[4721]: I0202 13:52:14.765546 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:52:44 crc kubenswrapper[4721]: I0202 13:52:44.763442 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 13:52:44 crc kubenswrapper[4721]: I0202 13:52:44.764102 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 13:53:14 crc 
Feb 02 13:53:14 crc kubenswrapper[4721]: I0202 13:53:14.763214 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 13:53:14 crc kubenswrapper[4721]: I0202 13:53:14.763709 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 13:53:14 crc kubenswrapper[4721]: I0202 13:53:14.763752 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rppjz"
Feb 02 13:53:14 crc kubenswrapper[4721]: I0202 13:53:14.764721 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95"} pod="openshift-machine-config-operator/machine-config-daemon-rppjz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 13:53:14 crc kubenswrapper[4721]: I0202 13:53:14.764769 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" containerID="cri-o://b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95" gracePeriod=600
Feb 02 13:53:14 crc kubenswrapper[4721]: E0202 13:53:14.888185 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:53:15 crc kubenswrapper[4721]: I0202 13:53:15.261273 4721 generic.go:334] "Generic (PLEG): container finished" podID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerID="b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95" exitCode=0
Feb 02 13:53:15 crc kubenswrapper[4721]: I0202 13:53:15.261344 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerDied","Data":"b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95"}
Feb 02 13:53:15 crc kubenswrapper[4721]: I0202 13:53:15.261420 4721 scope.go:117] "RemoveContainer" containerID="8408071901f7ef7e9399e59808de53d109544919537a0777d33359b2080a1dcb"
Feb 02 13:53:15 crc kubenswrapper[4721]: I0202 13:53:15.262018 4721 scope.go:117] "RemoveContainer" containerID="b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95"
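From here the container is in a crash loop at the kubelet's maximum delay: each sync reports "back-off 5m0s restarting failed container" and skips the start, which is why the retries below log every ten-odd seconds without a container ever starting. The delay follows the kubelet's standard doubling schedule (10 s base, capped at 5 m); a schematic sketch, not kubelet source:

    // Kubelet-style crash-loop delay: 10s base, doubling per restart,
    // capped at 5m - the cap is the "back-off 5m0s" in the messages below.
    package sketch

    import "time"

    func crashLoopDelay(restarts int) time.Duration {
        const (
            base     = 10 * time.Second
            maxDelay = 5 * time.Minute
        )
        d := base
        for i := 0; i < restarts; i++ {
            d *= 2
            if d >= maxDelay {
                return maxDelay // 10s, 20s, 40s, ... then pinned at 5m0s
            }
        }
        return d
    }
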
Feb 02 13:53:15 crc kubenswrapper[4721]: E0202 13:53:15.262451 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:53:28 crc kubenswrapper[4721]: I0202 13:53:28.409881 4721 scope.go:117] "RemoveContainer" containerID="b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95"
Feb 02 13:53:28 crc kubenswrapper[4721]: E0202 13:53:28.410650 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:53:40 crc kubenswrapper[4721]: I0202 13:53:40.419164 4721 scope.go:117] "RemoveContainer" containerID="b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95"
Feb 02 13:53:40 crc kubenswrapper[4721]: E0202 13:53:40.420255 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:53:51 crc kubenswrapper[4721]: I0202 13:53:51.409665 4721 scope.go:117] "RemoveContainer" containerID="b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95"
Feb 02 13:53:51 crc kubenswrapper[4721]: E0202 13:53:51.410570 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:54:05 crc kubenswrapper[4721]: I0202 13:54:05.427781 4721 scope.go:117] "RemoveContainer" containerID="b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95"
Feb 02 13:54:05 crc kubenswrapper[4721]: E0202 13:54:05.428673 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:54:20 crc kubenswrapper[4721]: I0202 13:54:20.423860 4721 scope.go:117] "RemoveContainer" containerID="b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95"
Feb 02 13:54:20 crc kubenswrapper[4721]: E0202 13:54:20.424946 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz"
podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:54:32 crc kubenswrapper[4721]: I0202 13:54:32.417320 4721 scope.go:117] "RemoveContainer" containerID="b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95" Feb 02 13:54:32 crc kubenswrapper[4721]: E0202 13:54:32.420240 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:54:46 crc kubenswrapper[4721]: I0202 13:54:46.410595 4721 scope.go:117] "RemoveContainer" containerID="b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95" Feb 02 13:54:46 crc kubenswrapper[4721]: E0202 13:54:46.411866 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:54:57 crc kubenswrapper[4721]: I0202 13:54:57.409997 4721 scope.go:117] "RemoveContainer" containerID="b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95" Feb 02 13:54:57 crc kubenswrapper[4721]: E0202 13:54:57.410977 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:55:12 crc kubenswrapper[4721]: I0202 13:55:12.409558 4721 scope.go:117] "RemoveContainer" containerID="b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95" Feb 02 13:55:12 crc kubenswrapper[4721]: E0202 13:55:12.410438 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:55:23 crc kubenswrapper[4721]: I0202 13:55:23.409856 4721 scope.go:117] "RemoveContainer" containerID="b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95" Feb 02 13:55:23 crc kubenswrapper[4721]: E0202 13:55:23.410724 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:55:34 crc kubenswrapper[4721]: I0202 13:55:34.413466 4721 scope.go:117] "RemoveContainer" 
containerID="b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95" Feb 02 13:55:34 crc kubenswrapper[4721]: E0202 13:55:34.415449 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:55:45 crc kubenswrapper[4721]: I0202 13:55:45.409394 4721 scope.go:117] "RemoveContainer" containerID="b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95" Feb 02 13:55:45 crc kubenswrapper[4721]: E0202 13:55:45.409976 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:56:00 crc kubenswrapper[4721]: I0202 13:56:00.420717 4721 scope.go:117] "RemoveContainer" containerID="b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95" Feb 02 13:56:00 crc kubenswrapper[4721]: E0202 13:56:00.421644 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:56:14 crc kubenswrapper[4721]: I0202 13:56:14.411617 4721 scope.go:117] "RemoveContainer" containerID="b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95" Feb 02 13:56:14 crc kubenswrapper[4721]: E0202 13:56:14.413723 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:56:25 crc kubenswrapper[4721]: I0202 13:56:25.410531 4721 scope.go:117] "RemoveContainer" containerID="b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95" Feb 02 13:56:25 crc kubenswrapper[4721]: E0202 13:56:25.412680 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:56:31 crc kubenswrapper[4721]: I0202 13:56:31.825862 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pxrt4"] Feb 02 13:56:31 crc kubenswrapper[4721]: E0202 13:56:31.827191 4721 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="3deafa69-5966-4e1f-980e-425af72acdc0" containerName="extract-utilities" Feb 02 13:56:31 crc kubenswrapper[4721]: I0202 13:56:31.827208 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="3deafa69-5966-4e1f-980e-425af72acdc0" containerName="extract-utilities" Feb 02 13:56:31 crc kubenswrapper[4721]: E0202 13:56:31.827218 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3deafa69-5966-4e1f-980e-425af72acdc0" containerName="registry-server" Feb 02 13:56:31 crc kubenswrapper[4721]: I0202 13:56:31.827227 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="3deafa69-5966-4e1f-980e-425af72acdc0" containerName="registry-server" Feb 02 13:56:31 crc kubenswrapper[4721]: E0202 13:56:31.827257 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3deafa69-5966-4e1f-980e-425af72acdc0" containerName="extract-content" Feb 02 13:56:31 crc kubenswrapper[4721]: I0202 13:56:31.827266 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="3deafa69-5966-4e1f-980e-425af72acdc0" containerName="extract-content" Feb 02 13:56:31 crc kubenswrapper[4721]: I0202 13:56:31.827859 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="3deafa69-5966-4e1f-980e-425af72acdc0" containerName="registry-server" Feb 02 13:56:31 crc kubenswrapper[4721]: I0202 13:56:31.830595 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pxrt4" Feb 02 13:56:31 crc kubenswrapper[4721]: I0202 13:56:31.841189 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pxrt4"] Feb 02 13:56:31 crc kubenswrapper[4721]: I0202 13:56:31.944869 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flg89\" (UniqueName: \"kubernetes.io/projected/6d185637-80f0-4145-b618-e2f865c63eae-kube-api-access-flg89\") pod \"community-operators-pxrt4\" (UID: \"6d185637-80f0-4145-b618-e2f865c63eae\") " pod="openshift-marketplace/community-operators-pxrt4" Feb 02 13:56:31 crc kubenswrapper[4721]: I0202 13:56:31.945123 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d185637-80f0-4145-b618-e2f865c63eae-catalog-content\") pod \"community-operators-pxrt4\" (UID: \"6d185637-80f0-4145-b618-e2f865c63eae\") " pod="openshift-marketplace/community-operators-pxrt4" Feb 02 13:56:31 crc kubenswrapper[4721]: I0202 13:56:31.945459 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d185637-80f0-4145-b618-e2f865c63eae-utilities\") pod \"community-operators-pxrt4\" (UID: \"6d185637-80f0-4145-b618-e2f865c63eae\") " pod="openshift-marketplace/community-operators-pxrt4" Feb 02 13:56:32 crc kubenswrapper[4721]: I0202 13:56:32.048764 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flg89\" (UniqueName: \"kubernetes.io/projected/6d185637-80f0-4145-b618-e2f865c63eae-kube-api-access-flg89\") pod \"community-operators-pxrt4\" (UID: \"6d185637-80f0-4145-b618-e2f865c63eae\") " pod="openshift-marketplace/community-operators-pxrt4" Feb 02 13:56:32 crc kubenswrapper[4721]: I0202 13:56:32.048879 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/6d185637-80f0-4145-b618-e2f865c63eae-catalog-content\") pod \"community-operators-pxrt4\" (UID: \"6d185637-80f0-4145-b618-e2f865c63eae\") " pod="openshift-marketplace/community-operators-pxrt4" Feb 02 13:56:32 crc kubenswrapper[4721]: I0202 13:56:32.048970 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d185637-80f0-4145-b618-e2f865c63eae-utilities\") pod \"community-operators-pxrt4\" (UID: \"6d185637-80f0-4145-b618-e2f865c63eae\") " pod="openshift-marketplace/community-operators-pxrt4" Feb 02 13:56:32 crc kubenswrapper[4721]: I0202 13:56:32.049412 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d185637-80f0-4145-b618-e2f865c63eae-catalog-content\") pod \"community-operators-pxrt4\" (UID: \"6d185637-80f0-4145-b618-e2f865c63eae\") " pod="openshift-marketplace/community-operators-pxrt4" Feb 02 13:56:32 crc kubenswrapper[4721]: I0202 13:56:32.049515 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d185637-80f0-4145-b618-e2f865c63eae-utilities\") pod \"community-operators-pxrt4\" (UID: \"6d185637-80f0-4145-b618-e2f865c63eae\") " pod="openshift-marketplace/community-operators-pxrt4" Feb 02 13:56:32 crc kubenswrapper[4721]: I0202 13:56:32.070595 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flg89\" (UniqueName: \"kubernetes.io/projected/6d185637-80f0-4145-b618-e2f865c63eae-kube-api-access-flg89\") pod \"community-operators-pxrt4\" (UID: \"6d185637-80f0-4145-b618-e2f865c63eae\") " pod="openshift-marketplace/community-operators-pxrt4" Feb 02 13:56:32 crc kubenswrapper[4721]: I0202 13:56:32.159475 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pxrt4" Feb 02 13:56:32 crc kubenswrapper[4721]: I0202 13:56:32.723447 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pxrt4"] Feb 02 13:56:33 crc kubenswrapper[4721]: I0202 13:56:33.092750 4721 generic.go:334] "Generic (PLEG): container finished" podID="6d185637-80f0-4145-b618-e2f865c63eae" containerID="6c5b85e99338fca40087a9f82e2ea6718a39fe8628f17dc05efb962529fa27f5" exitCode=0 Feb 02 13:56:33 crc kubenswrapper[4721]: I0202 13:56:33.092799 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxrt4" event={"ID":"6d185637-80f0-4145-b618-e2f865c63eae","Type":"ContainerDied","Data":"6c5b85e99338fca40087a9f82e2ea6718a39fe8628f17dc05efb962529fa27f5"} Feb 02 13:56:33 crc kubenswrapper[4721]: I0202 13:56:33.092830 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxrt4" event={"ID":"6d185637-80f0-4145-b618-e2f865c63eae","Type":"ContainerStarted","Data":"d1497bf13ca863f87cd8aabfa72fe1a83cd44e4483f83f21ded2bf8ec4bd89d0"} Feb 02 13:56:33 crc kubenswrapper[4721]: I0202 13:56:33.095190 4721 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 13:56:34 crc kubenswrapper[4721]: I0202 13:56:34.104044 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxrt4" event={"ID":"6d185637-80f0-4145-b618-e2f865c63eae","Type":"ContainerStarted","Data":"a926d42f9c28e7a97b179531d05ec7b7398048e5d75852e89a0a92e2638525d6"} Feb 02 13:56:35 crc kubenswrapper[4721]: I0202 13:56:35.126462 4721 generic.go:334] "Generic (PLEG): container finished" podID="6d185637-80f0-4145-b618-e2f865c63eae" containerID="a926d42f9c28e7a97b179531d05ec7b7398048e5d75852e89a0a92e2638525d6" exitCode=0 Feb 02 13:56:35 crc kubenswrapper[4721]: I0202 13:56:35.126534 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxrt4" event={"ID":"6d185637-80f0-4145-b618-e2f865c63eae","Type":"ContainerDied","Data":"a926d42f9c28e7a97b179531d05ec7b7398048e5d75852e89a0a92e2638525d6"} Feb 02 13:56:38 crc kubenswrapper[4721]: I0202 13:56:38.161435 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxrt4" event={"ID":"6d185637-80f0-4145-b618-e2f865c63eae","Type":"ContainerStarted","Data":"259c1463334a3bb187b54da633918809d1751ad5502acfa3850c7207c533b965"} Feb 02 13:56:38 crc kubenswrapper[4721]: I0202 13:56:38.193697 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pxrt4" podStartSLOduration=2.635322265 podStartE2EDuration="7.193680555s" podCreationTimestamp="2026-02-02 13:56:31 +0000 UTC" firstStartedPulling="2026-02-02 13:56:33.094703135 +0000 UTC m=+3333.397217544" lastFinishedPulling="2026-02-02 13:56:37.653061445 +0000 UTC m=+3337.955575834" observedRunningTime="2026-02-02 13:56:38.187664444 +0000 UTC m=+3338.490178833" watchObservedRunningTime="2026-02-02 13:56:38.193680555 +0000 UTC m=+3338.496194934" Feb 02 13:56:38 crc kubenswrapper[4721]: I0202 13:56:38.410129 4721 scope.go:117] "RemoveContainer" containerID="b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95" Feb 02 13:56:38 crc kubenswrapper[4721]: E0202 13:56:38.410384 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:56:42 crc kubenswrapper[4721]: I0202 13:56:42.160176 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pxrt4" Feb 02 13:56:42 crc kubenswrapper[4721]: I0202 13:56:42.160851 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pxrt4" Feb 02 13:56:42 crc kubenswrapper[4721]: I0202 13:56:42.212984 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pxrt4" Feb 02 13:56:42 crc kubenswrapper[4721]: I0202 13:56:42.268549 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pxrt4" Feb 02 13:56:42 crc kubenswrapper[4721]: I0202 13:56:42.462278 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pxrt4"] Feb 02 13:56:44 crc kubenswrapper[4721]: I0202 13:56:44.229207 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pxrt4" podUID="6d185637-80f0-4145-b618-e2f865c63eae" containerName="registry-server" containerID="cri-o://259c1463334a3bb187b54da633918809d1751ad5502acfa3850c7207c533b965" gracePeriod=2 Feb 02 13:56:44 crc kubenswrapper[4721]: I0202 13:56:44.789365 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pxrt4" Feb 02 13:56:44 crc kubenswrapper[4721]: I0202 13:56:44.878429 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flg89\" (UniqueName: \"kubernetes.io/projected/6d185637-80f0-4145-b618-e2f865c63eae-kube-api-access-flg89\") pod \"6d185637-80f0-4145-b618-e2f865c63eae\" (UID: \"6d185637-80f0-4145-b618-e2f865c63eae\") " Feb 02 13:56:44 crc kubenswrapper[4721]: I0202 13:56:44.878616 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d185637-80f0-4145-b618-e2f865c63eae-catalog-content\") pod \"6d185637-80f0-4145-b618-e2f865c63eae\" (UID: \"6d185637-80f0-4145-b618-e2f865c63eae\") " Feb 02 13:56:44 crc kubenswrapper[4721]: I0202 13:56:44.878716 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d185637-80f0-4145-b618-e2f865c63eae-utilities\") pod \"6d185637-80f0-4145-b618-e2f865c63eae\" (UID: \"6d185637-80f0-4145-b618-e2f865c63eae\") " Feb 02 13:56:44 crc kubenswrapper[4721]: I0202 13:56:44.879674 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d185637-80f0-4145-b618-e2f865c63eae-utilities" (OuterVolumeSpecName: "utilities") pod "6d185637-80f0-4145-b618-e2f865c63eae" (UID: "6d185637-80f0-4145-b618-e2f865c63eae"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:56:44 crc kubenswrapper[4721]: I0202 13:56:44.880311 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d185637-80f0-4145-b618-e2f865c63eae-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:56:44 crc kubenswrapper[4721]: I0202 13:56:44.897361 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d185637-80f0-4145-b618-e2f865c63eae-kube-api-access-flg89" (OuterVolumeSpecName: "kube-api-access-flg89") pod "6d185637-80f0-4145-b618-e2f865c63eae" (UID: "6d185637-80f0-4145-b618-e2f865c63eae"). InnerVolumeSpecName "kube-api-access-flg89". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:56:44 crc kubenswrapper[4721]: I0202 13:56:44.938245 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d185637-80f0-4145-b618-e2f865c63eae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d185637-80f0-4145-b618-e2f865c63eae" (UID: "6d185637-80f0-4145-b618-e2f865c63eae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:56:44 crc kubenswrapper[4721]: I0202 13:56:44.981926 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flg89\" (UniqueName: \"kubernetes.io/projected/6d185637-80f0-4145-b618-e2f865c63eae-kube-api-access-flg89\") on node \"crc\" DevicePath \"\"" Feb 02 13:56:44 crc kubenswrapper[4721]: I0202 13:56:44.981966 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d185637-80f0-4145-b618-e2f865c63eae-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:56:45 crc kubenswrapper[4721]: I0202 13:56:45.242892 4721 generic.go:334] "Generic (PLEG): container finished" podID="6d185637-80f0-4145-b618-e2f865c63eae" containerID="259c1463334a3bb187b54da633918809d1751ad5502acfa3850c7207c533b965" exitCode=0 Feb 02 13:56:45 crc kubenswrapper[4721]: I0202 13:56:45.242941 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pxrt4" Feb 02 13:56:45 crc kubenswrapper[4721]: I0202 13:56:45.242957 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxrt4" event={"ID":"6d185637-80f0-4145-b618-e2f865c63eae","Type":"ContainerDied","Data":"259c1463334a3bb187b54da633918809d1751ad5502acfa3850c7207c533b965"} Feb 02 13:56:45 crc kubenswrapper[4721]: I0202 13:56:45.243026 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxrt4" event={"ID":"6d185637-80f0-4145-b618-e2f865c63eae","Type":"ContainerDied","Data":"d1497bf13ca863f87cd8aabfa72fe1a83cd44e4483f83f21ded2bf8ec4bd89d0"} Feb 02 13:56:45 crc kubenswrapper[4721]: I0202 13:56:45.243049 4721 scope.go:117] "RemoveContainer" containerID="259c1463334a3bb187b54da633918809d1751ad5502acfa3850c7207c533b965" Feb 02 13:56:45 crc kubenswrapper[4721]: I0202 13:56:45.280645 4721 scope.go:117] "RemoveContainer" containerID="a926d42f9c28e7a97b179531d05ec7b7398048e5d75852e89a0a92e2638525d6" Feb 02 13:56:45 crc kubenswrapper[4721]: I0202 13:56:45.295037 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pxrt4"] Feb 02 13:56:45 crc kubenswrapper[4721]: I0202 13:56:45.308629 4721 scope.go:117] "RemoveContainer" containerID="6c5b85e99338fca40087a9f82e2ea6718a39fe8628f17dc05efb962529fa27f5" Feb 02 13:56:45 crc kubenswrapper[4721]: I0202 13:56:45.312633 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pxrt4"] Feb 02 13:56:45 crc kubenswrapper[4721]: I0202 13:56:45.367597 4721 scope.go:117] "RemoveContainer" containerID="259c1463334a3bb187b54da633918809d1751ad5502acfa3850c7207c533b965" Feb 02 13:56:45 crc kubenswrapper[4721]: E0202 13:56:45.368724 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"259c1463334a3bb187b54da633918809d1751ad5502acfa3850c7207c533b965\": container with ID starting with 259c1463334a3bb187b54da633918809d1751ad5502acfa3850c7207c533b965 not found: ID does not exist" containerID="259c1463334a3bb187b54da633918809d1751ad5502acfa3850c7207c533b965" Feb 02 13:56:45 crc kubenswrapper[4721]: I0202 13:56:45.368763 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"259c1463334a3bb187b54da633918809d1751ad5502acfa3850c7207c533b965"} err="failed to get container status \"259c1463334a3bb187b54da633918809d1751ad5502acfa3850c7207c533b965\": rpc error: code = NotFound desc = could not find container \"259c1463334a3bb187b54da633918809d1751ad5502acfa3850c7207c533b965\": container with ID starting with 259c1463334a3bb187b54da633918809d1751ad5502acfa3850c7207c533b965 not found: ID does not exist" Feb 02 13:56:45 crc kubenswrapper[4721]: I0202 13:56:45.368789 4721 scope.go:117] "RemoveContainer" containerID="a926d42f9c28e7a97b179531d05ec7b7398048e5d75852e89a0a92e2638525d6" Feb 02 13:56:45 crc kubenswrapper[4721]: E0202 13:56:45.369130 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a926d42f9c28e7a97b179531d05ec7b7398048e5d75852e89a0a92e2638525d6\": container with ID starting with a926d42f9c28e7a97b179531d05ec7b7398048e5d75852e89a0a92e2638525d6 not found: ID does not exist" containerID="a926d42f9c28e7a97b179531d05ec7b7398048e5d75852e89a0a92e2638525d6" Feb 02 13:56:45 crc kubenswrapper[4721]: I0202 13:56:45.369174 4721 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a926d42f9c28e7a97b179531d05ec7b7398048e5d75852e89a0a92e2638525d6"} err="failed to get container status \"a926d42f9c28e7a97b179531d05ec7b7398048e5d75852e89a0a92e2638525d6\": rpc error: code = NotFound desc = could not find container \"a926d42f9c28e7a97b179531d05ec7b7398048e5d75852e89a0a92e2638525d6\": container with ID starting with a926d42f9c28e7a97b179531d05ec7b7398048e5d75852e89a0a92e2638525d6 not found: ID does not exist" Feb 02 13:56:45 crc kubenswrapper[4721]: I0202 13:56:45.369192 4721 scope.go:117] "RemoveContainer" containerID="6c5b85e99338fca40087a9f82e2ea6718a39fe8628f17dc05efb962529fa27f5" Feb 02 13:56:45 crc kubenswrapper[4721]: E0202 13:56:45.369604 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c5b85e99338fca40087a9f82e2ea6718a39fe8628f17dc05efb962529fa27f5\": container with ID starting with 6c5b85e99338fca40087a9f82e2ea6718a39fe8628f17dc05efb962529fa27f5 not found: ID does not exist" containerID="6c5b85e99338fca40087a9f82e2ea6718a39fe8628f17dc05efb962529fa27f5" Feb 02 13:56:45 crc kubenswrapper[4721]: I0202 13:56:45.369700 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c5b85e99338fca40087a9f82e2ea6718a39fe8628f17dc05efb962529fa27f5"} err="failed to get container status \"6c5b85e99338fca40087a9f82e2ea6718a39fe8628f17dc05efb962529fa27f5\": rpc error: code = NotFound desc = could not find container \"6c5b85e99338fca40087a9f82e2ea6718a39fe8628f17dc05efb962529fa27f5\": container with ID starting with 6c5b85e99338fca40087a9f82e2ea6718a39fe8628f17dc05efb962529fa27f5 not found: ID does not exist" Feb 02 13:56:46 crc kubenswrapper[4721]: I0202 13:56:46.423805 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d185637-80f0-4145-b618-e2f865c63eae" path="/var/lib/kubelet/pods/6d185637-80f0-4145-b618-e2f865c63eae/volumes" Feb 02 13:56:50 crc kubenswrapper[4721]: I0202 13:56:50.416473 4721 scope.go:117] "RemoveContainer" containerID="b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95" Feb 02 13:56:50 crc kubenswrapper[4721]: E0202 13:56:50.417314 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:57:02 crc kubenswrapper[4721]: I0202 13:57:02.426446 4721 scope.go:117] "RemoveContainer" containerID="b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95" Feb 02 13:57:02 crc kubenswrapper[4721]: E0202 13:57:02.427304 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:57:13 crc kubenswrapper[4721]: I0202 13:57:13.409788 4721 scope.go:117] "RemoveContainer" containerID="b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95" Feb 02 13:57:13 crc 
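
The "Cleaned up orphaned pod volumes dir" line above closes the pod's lifecycle: once every volume has been torn down, kubelet housekeeping removes /var/lib/kubelet/pods/<uid>/volumes for any pod UID it no longer tracks. An illustrative sweep under that assumption (not kubelet source):

    // Remove the volumes dir of any on-disk pod UID that is no longer
    // active. Safe only after all volumes are unmounted, as in the
    // TearDown / "Volume detached" sequence above.
    package sketch

    import (
        "os"
        "path/filepath"
    )

    func cleanOrphanedPodDirs(kubeletRoot string, active map[string]bool) error {
        entries, err := os.ReadDir(filepath.Join(kubeletRoot, "pods"))
        if err != nil {
            return err
        }
        for _, e := range entries {
            if active[e.Name()] {
                continue
            }
            if err := os.RemoveAll(filepath.Join(kubeletRoot, "pods", e.Name(), "volumes")); err != nil {
                return err
            }
        }
        return nil
    }
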
Feb 02 13:57:13 crc kubenswrapper[4721]: I0202 13:57:13.409788 4721 scope.go:117] "RemoveContainer" containerID="b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95"
Feb 02 13:57:13 crc kubenswrapper[4721]: E0202 13:57:13.410506 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
Feb 02 13:57:24 crc kubenswrapper[4721]: I0202 13:57:24.365463 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l2rfj"]
Feb 02 13:57:24 crc kubenswrapper[4721]: E0202 13:57:24.366604 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d185637-80f0-4145-b618-e2f865c63eae" containerName="registry-server"
Feb 02 13:57:24 crc kubenswrapper[4721]: I0202 13:57:24.366623 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d185637-80f0-4145-b618-e2f865c63eae" containerName="registry-server"
Feb 02 13:57:24 crc kubenswrapper[4721]: E0202 13:57:24.366636 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d185637-80f0-4145-b618-e2f865c63eae" containerName="extract-utilities"
Feb 02 13:57:24 crc kubenswrapper[4721]: I0202 13:57:24.366645 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d185637-80f0-4145-b618-e2f865c63eae" containerName="extract-utilities"
Feb 02 13:57:24 crc kubenswrapper[4721]: E0202 13:57:24.366685 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d185637-80f0-4145-b618-e2f865c63eae" containerName="extract-content"
Feb 02 13:57:24 crc kubenswrapper[4721]: I0202 13:57:24.366693 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d185637-80f0-4145-b618-e2f865c63eae" containerName="extract-content"
Feb 02 13:57:24 crc kubenswrapper[4721]: I0202 13:57:24.366999 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d185637-80f0-4145-b618-e2f865c63eae" containerName="registry-server"
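
The cpu_manager/state_mem/memory_manager burst above fires on pod admission: before admitting certified-operators-l2rfj, the resource managers purge per-container assignments left behind by the community-operators pod deleted at 13:56:45 (same UID, 6d185637-...). The E-level lines only mark each stale entry found; nothing is failing. Schematically:

    // Drop per-container resource assignments whose pod UID the kubelet no
    // longer tracks. Real state lives in the cpu/memory manager
    // checkpoints, not a plain map; this is an illustration only.
    package sketch

    type containerKey struct {
        podUID, container string
    }

    func removeStaleState(assignments map[containerKey]string, activePods map[string]bool) {
        for key := range assignments {
            if !activePods[key.podUID] {
                delete(assignments, key) // "Deleted CPUSet assignment"
            }
        }
    }
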
Need to start a new one" pod="openshift-marketplace/certified-operators-l2rfj" Feb 02 13:57:24 crc kubenswrapper[4721]: I0202 13:57:24.376735 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l2rfj"] Feb 02 13:57:24 crc kubenswrapper[4721]: I0202 13:57:24.424928 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5hzs\" (UniqueName: \"kubernetes.io/projected/ce3793f3-fe06-4956-84c0-3811f6449960-kube-api-access-g5hzs\") pod \"certified-operators-l2rfj\" (UID: \"ce3793f3-fe06-4956-84c0-3811f6449960\") " pod="openshift-marketplace/certified-operators-l2rfj" Feb 02 13:57:24 crc kubenswrapper[4721]: I0202 13:57:24.425189 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce3793f3-fe06-4956-84c0-3811f6449960-catalog-content\") pod \"certified-operators-l2rfj\" (UID: \"ce3793f3-fe06-4956-84c0-3811f6449960\") " pod="openshift-marketplace/certified-operators-l2rfj" Feb 02 13:57:24 crc kubenswrapper[4721]: I0202 13:57:24.425690 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce3793f3-fe06-4956-84c0-3811f6449960-utilities\") pod \"certified-operators-l2rfj\" (UID: \"ce3793f3-fe06-4956-84c0-3811f6449960\") " pod="openshift-marketplace/certified-operators-l2rfj" Feb 02 13:57:24 crc kubenswrapper[4721]: I0202 13:57:24.527861 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce3793f3-fe06-4956-84c0-3811f6449960-catalog-content\") pod \"certified-operators-l2rfj\" (UID: \"ce3793f3-fe06-4956-84c0-3811f6449960\") " pod="openshift-marketplace/certified-operators-l2rfj" Feb 02 13:57:24 crc kubenswrapper[4721]: I0202 13:57:24.528052 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce3793f3-fe06-4956-84c0-3811f6449960-utilities\") pod \"certified-operators-l2rfj\" (UID: \"ce3793f3-fe06-4956-84c0-3811f6449960\") " pod="openshift-marketplace/certified-operators-l2rfj" Feb 02 13:57:24 crc kubenswrapper[4721]: I0202 13:57:24.528257 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5hzs\" (UniqueName: \"kubernetes.io/projected/ce3793f3-fe06-4956-84c0-3811f6449960-kube-api-access-g5hzs\") pod \"certified-operators-l2rfj\" (UID: \"ce3793f3-fe06-4956-84c0-3811f6449960\") " pod="openshift-marketplace/certified-operators-l2rfj" Feb 02 13:57:24 crc kubenswrapper[4721]: I0202 13:57:24.528381 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce3793f3-fe06-4956-84c0-3811f6449960-catalog-content\") pod \"certified-operators-l2rfj\" (UID: \"ce3793f3-fe06-4956-84c0-3811f6449960\") " pod="openshift-marketplace/certified-operators-l2rfj" Feb 02 13:57:24 crc kubenswrapper[4721]: I0202 13:57:24.528541 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce3793f3-fe06-4956-84c0-3811f6449960-utilities\") pod \"certified-operators-l2rfj\" (UID: \"ce3793f3-fe06-4956-84c0-3811f6449960\") " pod="openshift-marketplace/certified-operators-l2rfj" Feb 02 13:57:24 crc kubenswrapper[4721]: I0202 13:57:24.546729 4721 operation_generator.go:637] 
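
The reconciler_common lines show the kubelet's volume reconciler walking its desired-versus-actual state: VerifyControllerAttachedVolume registers each volume in the desired world, MountVolume/SetUp brings the actual world in line, and (as at 13:56:44 above) UnmountVolume runs once a volume is no longer desired. A toy version of that loop, assuming simple string-keyed sets; the real reconciler also handles attach/detach and device mounts:

    // Mount what is desired but absent, unmount what is left over.
    package sketch

    func reconcileMounts(desired, mounted map[string]bool,
        mount, unmount func(volume string) error) error {
        for vol := range desired {
            if !mounted[vol] {
                if err := mount(vol); err != nil { // "MountVolume started ..."
                    return err
                }
            }
        }
        for vol := range mounted {
            if !desired[vol] {
                if err := unmount(vol); err != nil { // "UnmountVolume started ..."
                    return err
                }
            }
        }
        return nil
    }
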
"MountVolume.SetUp succeeded for volume \"kube-api-access-g5hzs\" (UniqueName: \"kubernetes.io/projected/ce3793f3-fe06-4956-84c0-3811f6449960-kube-api-access-g5hzs\") pod \"certified-operators-l2rfj\" (UID: \"ce3793f3-fe06-4956-84c0-3811f6449960\") " pod="openshift-marketplace/certified-operators-l2rfj" Feb 02 13:57:24 crc kubenswrapper[4721]: I0202 13:57:24.701112 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l2rfj" Feb 02 13:57:25 crc kubenswrapper[4721]: I0202 13:57:25.302396 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l2rfj"] Feb 02 13:57:25 crc kubenswrapper[4721]: I0202 13:57:25.652947 4721 generic.go:334] "Generic (PLEG): container finished" podID="ce3793f3-fe06-4956-84c0-3811f6449960" containerID="416ec678adbc5a8a9292d17b0a54b193e2d634d29af6fe00334987f93e56cf85" exitCode=0 Feb 02 13:57:25 crc kubenswrapper[4721]: I0202 13:57:25.653052 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2rfj" event={"ID":"ce3793f3-fe06-4956-84c0-3811f6449960","Type":"ContainerDied","Data":"416ec678adbc5a8a9292d17b0a54b193e2d634d29af6fe00334987f93e56cf85"} Feb 02 13:57:25 crc kubenswrapper[4721]: I0202 13:57:25.653293 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2rfj" event={"ID":"ce3793f3-fe06-4956-84c0-3811f6449960","Type":"ContainerStarted","Data":"814efe839a1fecc5914e839d354f8a1443d99c6e663998cefc39d21b28109536"} Feb 02 13:57:27 crc kubenswrapper[4721]: I0202 13:57:27.410414 4721 scope.go:117] "RemoveContainer" containerID="b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95" Feb 02 13:57:27 crc kubenswrapper[4721]: E0202 13:57:27.410706 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:57:27 crc kubenswrapper[4721]: I0202 13:57:27.961342 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q6jkv"] Feb 02 13:57:27 crc kubenswrapper[4721]: I0202 13:57:27.964558 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q6jkv" Feb 02 13:57:27 crc kubenswrapper[4721]: I0202 13:57:27.979241 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q6jkv"] Feb 02 13:57:28 crc kubenswrapper[4721]: I0202 13:57:28.052109 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7625a6ea-aff2-4a16-a62a-fec198126d2f-utilities\") pod \"redhat-operators-q6jkv\" (UID: \"7625a6ea-aff2-4a16-a62a-fec198126d2f\") " pod="openshift-marketplace/redhat-operators-q6jkv" Feb 02 13:57:28 crc kubenswrapper[4721]: I0202 13:57:28.052277 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk845\" (UniqueName: \"kubernetes.io/projected/7625a6ea-aff2-4a16-a62a-fec198126d2f-kube-api-access-kk845\") pod \"redhat-operators-q6jkv\" (UID: \"7625a6ea-aff2-4a16-a62a-fec198126d2f\") " pod="openshift-marketplace/redhat-operators-q6jkv" Feb 02 13:57:28 crc kubenswrapper[4721]: I0202 13:57:28.052448 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7625a6ea-aff2-4a16-a62a-fec198126d2f-catalog-content\") pod \"redhat-operators-q6jkv\" (UID: \"7625a6ea-aff2-4a16-a62a-fec198126d2f\") " pod="openshift-marketplace/redhat-operators-q6jkv" Feb 02 13:57:28 crc kubenswrapper[4721]: I0202 13:57:28.156527 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk845\" (UniqueName: \"kubernetes.io/projected/7625a6ea-aff2-4a16-a62a-fec198126d2f-kube-api-access-kk845\") pod \"redhat-operators-q6jkv\" (UID: \"7625a6ea-aff2-4a16-a62a-fec198126d2f\") " pod="openshift-marketplace/redhat-operators-q6jkv" Feb 02 13:57:28 crc kubenswrapper[4721]: I0202 13:57:28.157370 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7625a6ea-aff2-4a16-a62a-fec198126d2f-catalog-content\") pod \"redhat-operators-q6jkv\" (UID: \"7625a6ea-aff2-4a16-a62a-fec198126d2f\") " pod="openshift-marketplace/redhat-operators-q6jkv" Feb 02 13:57:28 crc kubenswrapper[4721]: I0202 13:57:28.157668 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7625a6ea-aff2-4a16-a62a-fec198126d2f-utilities\") pod \"redhat-operators-q6jkv\" (UID: \"7625a6ea-aff2-4a16-a62a-fec198126d2f\") " pod="openshift-marketplace/redhat-operators-q6jkv" Feb 02 13:57:28 crc kubenswrapper[4721]: I0202 13:57:28.157842 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7625a6ea-aff2-4a16-a62a-fec198126d2f-catalog-content\") pod \"redhat-operators-q6jkv\" (UID: \"7625a6ea-aff2-4a16-a62a-fec198126d2f\") " pod="openshift-marketplace/redhat-operators-q6jkv" Feb 02 13:57:28 crc kubenswrapper[4721]: I0202 13:57:28.158198 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7625a6ea-aff2-4a16-a62a-fec198126d2f-utilities\") pod \"redhat-operators-q6jkv\" (UID: \"7625a6ea-aff2-4a16-a62a-fec198126d2f\") " pod="openshift-marketplace/redhat-operators-q6jkv" Feb 02 13:57:28 crc kubenswrapper[4721]: I0202 13:57:28.175741 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kk845\" (UniqueName: \"kubernetes.io/projected/7625a6ea-aff2-4a16-a62a-fec198126d2f-kube-api-access-kk845\") pod \"redhat-operators-q6jkv\" (UID: \"7625a6ea-aff2-4a16-a62a-fec198126d2f\") " pod="openshift-marketplace/redhat-operators-q6jkv" Feb 02 13:57:28 crc kubenswrapper[4721]: I0202 13:57:28.286816 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q6jkv" Feb 02 13:57:28 crc kubenswrapper[4721]: I0202 13:57:28.681265 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2rfj" event={"ID":"ce3793f3-fe06-4956-84c0-3811f6449960","Type":"ContainerStarted","Data":"20d8b245a9d340220dba41ff04541a95c6f99a8d0ac94cc7eec398ec30cf7114"} Feb 02 13:57:28 crc kubenswrapper[4721]: W0202 13:57:28.788810 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7625a6ea_aff2_4a16_a62a_fec198126d2f.slice/crio-444e09281b872c51787242c9be7b8fdce41a8b5549c444fde0bd545c400d7ef0 WatchSource:0}: Error finding container 444e09281b872c51787242c9be7b8fdce41a8b5549c444fde0bd545c400d7ef0: Status 404 returned error can't find the container with id 444e09281b872c51787242c9be7b8fdce41a8b5549c444fde0bd545c400d7ef0 Feb 02 13:57:28 crc kubenswrapper[4721]: I0202 13:57:28.793364 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q6jkv"] Feb 02 13:57:29 crc kubenswrapper[4721]: I0202 13:57:29.691855 4721 generic.go:334] "Generic (PLEG): container finished" podID="7625a6ea-aff2-4a16-a62a-fec198126d2f" containerID="c01d56064c94befb707a3cfc3128fb239069c1ed7f645a9ef02cd7d25a3e6539" exitCode=0 Feb 02 13:57:29 crc kubenswrapper[4721]: I0202 13:57:29.691993 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6jkv" event={"ID":"7625a6ea-aff2-4a16-a62a-fec198126d2f","Type":"ContainerDied","Data":"c01d56064c94befb707a3cfc3128fb239069c1ed7f645a9ef02cd7d25a3e6539"} Feb 02 13:57:29 crc kubenswrapper[4721]: I0202 13:57:29.692483 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6jkv" event={"ID":"7625a6ea-aff2-4a16-a62a-fec198126d2f","Type":"ContainerStarted","Data":"444e09281b872c51787242c9be7b8fdce41a8b5549c444fde0bd545c400d7ef0"} Feb 02 13:57:32 crc kubenswrapper[4721]: I0202 13:57:32.725377 4721 generic.go:334] "Generic (PLEG): container finished" podID="ce3793f3-fe06-4956-84c0-3811f6449960" containerID="20d8b245a9d340220dba41ff04541a95c6f99a8d0ac94cc7eec398ec30cf7114" exitCode=0 Feb 02 13:57:32 crc kubenswrapper[4721]: I0202 13:57:32.726238 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2rfj" event={"ID":"ce3793f3-fe06-4956-84c0-3811f6449960","Type":"ContainerDied","Data":"20d8b245a9d340220dba41ff04541a95c6f99a8d0ac94cc7eec398ec30cf7114"} Feb 02 13:57:40 crc kubenswrapper[4721]: I0202 13:57:40.417572 4721 scope.go:117] "RemoveContainer" containerID="b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95" Feb 02 13:57:40 crc kubenswrapper[4721]: E0202 13:57:40.418323 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:57:44 crc kubenswrapper[4721]: E0202 13:57:44.287471 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 02 13:57:44 crc kubenswrapper[4721]: E0202 13:57:44.288505 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kk845,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-q6jkv_openshift-marketplace(7625a6ea-aff2-4a16-a62a-fec198126d2f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 13:57:44 crc kubenswrapper[4721]: E0202 13:57:44.290137 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-q6jkv" podUID="7625a6ea-aff2-4a16-a62a-fec198126d2f" Feb 02 13:57:44 crc kubenswrapper[4721]: I0202 13:57:44.856601 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2rfj" event={"ID":"ce3793f3-fe06-4956-84c0-3811f6449960","Type":"ContainerStarted","Data":"eda9a3095650a4417a1d00a1fd3927affd2a6123e2a22b48b305282f96bf93a9"} Feb 02 13:57:44 crc kubenswrapper[4721]: E0202 13:57:44.859936 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-q6jkv" podUID="7625a6ea-aff2-4a16-a62a-fec198126d2f" Feb 02 13:57:44 crc kubenswrapper[4721]: I0202 13:57:44.905793 4721 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/certified-operators-l2rfj" podStartSLOduration=3.310618251 podStartE2EDuration="20.905773249s" podCreationTimestamp="2026-02-02 13:57:24 +0000 UTC" firstStartedPulling="2026-02-02 13:57:26.664098939 +0000 UTC m=+3386.966613328" lastFinishedPulling="2026-02-02 13:57:44.259253937 +0000 UTC m=+3404.561768326" observedRunningTime="2026-02-02 13:57:44.899732936 +0000 UTC m=+3405.202247345" watchObservedRunningTime="2026-02-02 13:57:44.905773249 +0000 UTC m=+3405.208287638" Feb 02 13:57:54 crc kubenswrapper[4721]: I0202 13:57:54.409909 4721 scope.go:117] "RemoveContainer" containerID="b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95" Feb 02 13:57:54 crc kubenswrapper[4721]: E0202 13:57:54.410791 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:57:54 crc kubenswrapper[4721]: I0202 13:57:54.701757 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l2rfj" Feb 02 13:57:54 crc kubenswrapper[4721]: I0202 13:57:54.701869 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l2rfj" Feb 02 13:57:54 crc kubenswrapper[4721]: I0202 13:57:54.755042 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l2rfj" Feb 02 13:57:55 crc kubenswrapper[4721]: I0202 13:57:55.012585 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l2rfj" Feb 02 13:57:55 crc kubenswrapper[4721]: I0202 13:57:55.571225 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l2rfj"] Feb 02 13:57:56 crc kubenswrapper[4721]: I0202 13:57:56.982591 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l2rfj" podUID="ce3793f3-fe06-4956-84c0-3811f6449960" containerName="registry-server" containerID="cri-o://eda9a3095650a4417a1d00a1fd3927affd2a6123e2a22b48b305282f96bf93a9" gracePeriod=2 Feb 02 13:57:57 crc kubenswrapper[4721]: I0202 13:57:57.542771 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l2rfj" Feb 02 13:57:57 crc kubenswrapper[4721]: I0202 13:57:57.691666 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce3793f3-fe06-4956-84c0-3811f6449960-catalog-content\") pod \"ce3793f3-fe06-4956-84c0-3811f6449960\" (UID: \"ce3793f3-fe06-4956-84c0-3811f6449960\") " Feb 02 13:57:57 crc kubenswrapper[4721]: I0202 13:57:57.691727 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce3793f3-fe06-4956-84c0-3811f6449960-utilities\") pod \"ce3793f3-fe06-4956-84c0-3811f6449960\" (UID: \"ce3793f3-fe06-4956-84c0-3811f6449960\") " Feb 02 13:57:57 crc kubenswrapper[4721]: I0202 13:57:57.691840 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5hzs\" (UniqueName: \"kubernetes.io/projected/ce3793f3-fe06-4956-84c0-3811f6449960-kube-api-access-g5hzs\") pod \"ce3793f3-fe06-4956-84c0-3811f6449960\" (UID: \"ce3793f3-fe06-4956-84c0-3811f6449960\") " Feb 02 13:57:57 crc kubenswrapper[4721]: I0202 13:57:57.692548 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce3793f3-fe06-4956-84c0-3811f6449960-utilities" (OuterVolumeSpecName: "utilities") pod "ce3793f3-fe06-4956-84c0-3811f6449960" (UID: "ce3793f3-fe06-4956-84c0-3811f6449960"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:57:57 crc kubenswrapper[4721]: I0202 13:57:57.693030 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce3793f3-fe06-4956-84c0-3811f6449960-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:57:57 crc kubenswrapper[4721]: I0202 13:57:57.700452 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce3793f3-fe06-4956-84c0-3811f6449960-kube-api-access-g5hzs" (OuterVolumeSpecName: "kube-api-access-g5hzs") pod "ce3793f3-fe06-4956-84c0-3811f6449960" (UID: "ce3793f3-fe06-4956-84c0-3811f6449960"). InnerVolumeSpecName "kube-api-access-g5hzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:57:57 crc kubenswrapper[4721]: I0202 13:57:57.745782 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce3793f3-fe06-4956-84c0-3811f6449960-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce3793f3-fe06-4956-84c0-3811f6449960" (UID: "ce3793f3-fe06-4956-84c0-3811f6449960"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:57:57 crc kubenswrapper[4721]: I0202 13:57:57.795565 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce3793f3-fe06-4956-84c0-3811f6449960-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:57:57 crc kubenswrapper[4721]: I0202 13:57:57.795619 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5hzs\" (UniqueName: \"kubernetes.io/projected/ce3793f3-fe06-4956-84c0-3811f6449960-kube-api-access-g5hzs\") on node \"crc\" DevicePath \"\"" Feb 02 13:57:57 crc kubenswrapper[4721]: I0202 13:57:57.995750 4721 generic.go:334] "Generic (PLEG): container finished" podID="ce3793f3-fe06-4956-84c0-3811f6449960" containerID="eda9a3095650a4417a1d00a1fd3927affd2a6123e2a22b48b305282f96bf93a9" exitCode=0 Feb 02 13:57:57 crc kubenswrapper[4721]: I0202 13:57:57.995807 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2rfj" event={"ID":"ce3793f3-fe06-4956-84c0-3811f6449960","Type":"ContainerDied","Data":"eda9a3095650a4417a1d00a1fd3927affd2a6123e2a22b48b305282f96bf93a9"} Feb 02 13:57:57 crc kubenswrapper[4721]: I0202 13:57:57.995837 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l2rfj" Feb 02 13:57:57 crc kubenswrapper[4721]: I0202 13:57:57.995859 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2rfj" event={"ID":"ce3793f3-fe06-4956-84c0-3811f6449960","Type":"ContainerDied","Data":"814efe839a1fecc5914e839d354f8a1443d99c6e663998cefc39d21b28109536"} Feb 02 13:57:57 crc kubenswrapper[4721]: I0202 13:57:57.995879 4721 scope.go:117] "RemoveContainer" containerID="eda9a3095650a4417a1d00a1fd3927affd2a6123e2a22b48b305282f96bf93a9" Feb 02 13:57:58 crc kubenswrapper[4721]: I0202 13:57:58.017135 4721 scope.go:117] "RemoveContainer" containerID="20d8b245a9d340220dba41ff04541a95c6f99a8d0ac94cc7eec398ec30cf7114" Feb 02 13:57:58 crc kubenswrapper[4721]: I0202 13:57:58.043315 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l2rfj"] Feb 02 13:57:58 crc kubenswrapper[4721]: I0202 13:57:58.048972 4721 scope.go:117] "RemoveContainer" containerID="416ec678adbc5a8a9292d17b0a54b193e2d634d29af6fe00334987f93e56cf85" Feb 02 13:57:58 crc kubenswrapper[4721]: I0202 13:57:58.055600 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l2rfj"] Feb 02 13:57:58 crc kubenswrapper[4721]: I0202 13:57:58.109951 4721 scope.go:117] "RemoveContainer" containerID="eda9a3095650a4417a1d00a1fd3927affd2a6123e2a22b48b305282f96bf93a9" Feb 02 13:57:58 crc kubenswrapper[4721]: E0202 13:57:58.110493 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eda9a3095650a4417a1d00a1fd3927affd2a6123e2a22b48b305282f96bf93a9\": container with ID starting with eda9a3095650a4417a1d00a1fd3927affd2a6123e2a22b48b305282f96bf93a9 not found: ID does not exist" containerID="eda9a3095650a4417a1d00a1fd3927affd2a6123e2a22b48b305282f96bf93a9" Feb 02 13:57:58 crc kubenswrapper[4721]: I0202 13:57:58.110526 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eda9a3095650a4417a1d00a1fd3927affd2a6123e2a22b48b305282f96bf93a9"} err="failed to get container status 
\"eda9a3095650a4417a1d00a1fd3927affd2a6123e2a22b48b305282f96bf93a9\": rpc error: code = NotFound desc = could not find container \"eda9a3095650a4417a1d00a1fd3927affd2a6123e2a22b48b305282f96bf93a9\": container with ID starting with eda9a3095650a4417a1d00a1fd3927affd2a6123e2a22b48b305282f96bf93a9 not found: ID does not exist" Feb 02 13:57:58 crc kubenswrapper[4721]: I0202 13:57:58.110545 4721 scope.go:117] "RemoveContainer" containerID="20d8b245a9d340220dba41ff04541a95c6f99a8d0ac94cc7eec398ec30cf7114" Feb 02 13:57:58 crc kubenswrapper[4721]: E0202 13:57:58.111226 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20d8b245a9d340220dba41ff04541a95c6f99a8d0ac94cc7eec398ec30cf7114\": container with ID starting with 20d8b245a9d340220dba41ff04541a95c6f99a8d0ac94cc7eec398ec30cf7114 not found: ID does not exist" containerID="20d8b245a9d340220dba41ff04541a95c6f99a8d0ac94cc7eec398ec30cf7114" Feb 02 13:57:58 crc kubenswrapper[4721]: I0202 13:57:58.111270 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20d8b245a9d340220dba41ff04541a95c6f99a8d0ac94cc7eec398ec30cf7114"} err="failed to get container status \"20d8b245a9d340220dba41ff04541a95c6f99a8d0ac94cc7eec398ec30cf7114\": rpc error: code = NotFound desc = could not find container \"20d8b245a9d340220dba41ff04541a95c6f99a8d0ac94cc7eec398ec30cf7114\": container with ID starting with 20d8b245a9d340220dba41ff04541a95c6f99a8d0ac94cc7eec398ec30cf7114 not found: ID does not exist" Feb 02 13:57:58 crc kubenswrapper[4721]: I0202 13:57:58.111324 4721 scope.go:117] "RemoveContainer" containerID="416ec678adbc5a8a9292d17b0a54b193e2d634d29af6fe00334987f93e56cf85" Feb 02 13:57:58 crc kubenswrapper[4721]: E0202 13:57:58.111618 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"416ec678adbc5a8a9292d17b0a54b193e2d634d29af6fe00334987f93e56cf85\": container with ID starting with 416ec678adbc5a8a9292d17b0a54b193e2d634d29af6fe00334987f93e56cf85 not found: ID does not exist" containerID="416ec678adbc5a8a9292d17b0a54b193e2d634d29af6fe00334987f93e56cf85" Feb 02 13:57:58 crc kubenswrapper[4721]: I0202 13:57:58.111654 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"416ec678adbc5a8a9292d17b0a54b193e2d634d29af6fe00334987f93e56cf85"} err="failed to get container status \"416ec678adbc5a8a9292d17b0a54b193e2d634d29af6fe00334987f93e56cf85\": rpc error: code = NotFound desc = could not find container \"416ec678adbc5a8a9292d17b0a54b193e2d634d29af6fe00334987f93e56cf85\": container with ID starting with 416ec678adbc5a8a9292d17b0a54b193e2d634d29af6fe00334987f93e56cf85 not found: ID does not exist" Feb 02 13:57:58 crc kubenswrapper[4721]: I0202 13:57:58.423698 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce3793f3-fe06-4956-84c0-3811f6449960" path="/var/lib/kubelet/pods/ce3793f3-fe06-4956-84c0-3811f6449960/volumes" Feb 02 13:58:00 crc kubenswrapper[4721]: I0202 13:58:00.019103 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6jkv" event={"ID":"7625a6ea-aff2-4a16-a62a-fec198126d2f","Type":"ContainerStarted","Data":"09a1c914b9db7ff8c4ed2aab89cc37a8e547ffbab3c86e51891ec9356164aa85"} Feb 02 13:58:01 crc kubenswrapper[4721]: I0202 13:58:01.030336 4721 generic.go:334] "Generic (PLEG): container finished" podID="7625a6ea-aff2-4a16-a62a-fec198126d2f" 
containerID="09a1c914b9db7ff8c4ed2aab89cc37a8e547ffbab3c86e51891ec9356164aa85" exitCode=0 Feb 02 13:58:01 crc kubenswrapper[4721]: I0202 13:58:01.030442 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6jkv" event={"ID":"7625a6ea-aff2-4a16-a62a-fec198126d2f","Type":"ContainerDied","Data":"09a1c914b9db7ff8c4ed2aab89cc37a8e547ffbab3c86e51891ec9356164aa85"} Feb 02 13:58:03 crc kubenswrapper[4721]: I0202 13:58:03.049895 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6jkv" event={"ID":"7625a6ea-aff2-4a16-a62a-fec198126d2f","Type":"ContainerStarted","Data":"558cb9db418799365f6b95799667f971c21531f838a4c6731ce4bfc1255d7a15"} Feb 02 13:58:03 crc kubenswrapper[4721]: I0202 13:58:03.070315 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q6jkv" podStartSLOduration=3.984900403 podStartE2EDuration="36.07029726s" podCreationTimestamp="2026-02-02 13:57:27 +0000 UTC" firstStartedPulling="2026-02-02 13:57:29.694087672 +0000 UTC m=+3389.996602061" lastFinishedPulling="2026-02-02 13:58:01.779484529 +0000 UTC m=+3422.081998918" observedRunningTime="2026-02-02 13:58:03.069478598 +0000 UTC m=+3423.371993017" watchObservedRunningTime="2026-02-02 13:58:03.07029726 +0000 UTC m=+3423.372811659" Feb 02 13:58:03 crc kubenswrapper[4721]: I0202 13:58:03.786683 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gzhg4"] Feb 02 13:58:03 crc kubenswrapper[4721]: E0202 13:58:03.787305 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce3793f3-fe06-4956-84c0-3811f6449960" containerName="extract-content" Feb 02 13:58:03 crc kubenswrapper[4721]: I0202 13:58:03.787331 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce3793f3-fe06-4956-84c0-3811f6449960" containerName="extract-content" Feb 02 13:58:03 crc kubenswrapper[4721]: E0202 13:58:03.787364 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce3793f3-fe06-4956-84c0-3811f6449960" containerName="registry-server" Feb 02 13:58:03 crc kubenswrapper[4721]: I0202 13:58:03.787373 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce3793f3-fe06-4956-84c0-3811f6449960" containerName="registry-server" Feb 02 13:58:03 crc kubenswrapper[4721]: E0202 13:58:03.787384 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce3793f3-fe06-4956-84c0-3811f6449960" containerName="extract-utilities" Feb 02 13:58:03 crc kubenswrapper[4721]: I0202 13:58:03.787396 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce3793f3-fe06-4956-84c0-3811f6449960" containerName="extract-utilities" Feb 02 13:58:03 crc kubenswrapper[4721]: I0202 13:58:03.787737 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce3793f3-fe06-4956-84c0-3811f6449960" containerName="registry-server" Feb 02 13:58:03 crc kubenswrapper[4721]: I0202 13:58:03.790217 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gzhg4" Feb 02 13:58:03 crc kubenswrapper[4721]: I0202 13:58:03.806091 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gzhg4"] Feb 02 13:58:03 crc kubenswrapper[4721]: I0202 13:58:03.947794 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb4ss\" (UniqueName: \"kubernetes.io/projected/4d9d6046-cd2e-407a-8264-66d3a10338a5-kube-api-access-fb4ss\") pod \"redhat-marketplace-gzhg4\" (UID: \"4d9d6046-cd2e-407a-8264-66d3a10338a5\") " pod="openshift-marketplace/redhat-marketplace-gzhg4" Feb 02 13:58:03 crc kubenswrapper[4721]: I0202 13:58:03.947889 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d9d6046-cd2e-407a-8264-66d3a10338a5-utilities\") pod \"redhat-marketplace-gzhg4\" (UID: \"4d9d6046-cd2e-407a-8264-66d3a10338a5\") " pod="openshift-marketplace/redhat-marketplace-gzhg4" Feb 02 13:58:03 crc kubenswrapper[4721]: I0202 13:58:03.947954 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d9d6046-cd2e-407a-8264-66d3a10338a5-catalog-content\") pod \"redhat-marketplace-gzhg4\" (UID: \"4d9d6046-cd2e-407a-8264-66d3a10338a5\") " pod="openshift-marketplace/redhat-marketplace-gzhg4" Feb 02 13:58:04 crc kubenswrapper[4721]: I0202 13:58:04.050774 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb4ss\" (UniqueName: \"kubernetes.io/projected/4d9d6046-cd2e-407a-8264-66d3a10338a5-kube-api-access-fb4ss\") pod \"redhat-marketplace-gzhg4\" (UID: \"4d9d6046-cd2e-407a-8264-66d3a10338a5\") " pod="openshift-marketplace/redhat-marketplace-gzhg4" Feb 02 13:58:04 crc kubenswrapper[4721]: I0202 13:58:04.050858 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d9d6046-cd2e-407a-8264-66d3a10338a5-utilities\") pod \"redhat-marketplace-gzhg4\" (UID: \"4d9d6046-cd2e-407a-8264-66d3a10338a5\") " pod="openshift-marketplace/redhat-marketplace-gzhg4" Feb 02 13:58:04 crc kubenswrapper[4721]: I0202 13:58:04.050907 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d9d6046-cd2e-407a-8264-66d3a10338a5-catalog-content\") pod \"redhat-marketplace-gzhg4\" (UID: \"4d9d6046-cd2e-407a-8264-66d3a10338a5\") " pod="openshift-marketplace/redhat-marketplace-gzhg4" Feb 02 13:58:04 crc kubenswrapper[4721]: I0202 13:58:04.051605 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d9d6046-cd2e-407a-8264-66d3a10338a5-catalog-content\") pod \"redhat-marketplace-gzhg4\" (UID: \"4d9d6046-cd2e-407a-8264-66d3a10338a5\") " pod="openshift-marketplace/redhat-marketplace-gzhg4" Feb 02 13:58:04 crc kubenswrapper[4721]: I0202 13:58:04.052245 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d9d6046-cd2e-407a-8264-66d3a10338a5-utilities\") pod \"redhat-marketplace-gzhg4\" (UID: \"4d9d6046-cd2e-407a-8264-66d3a10338a5\") " pod="openshift-marketplace/redhat-marketplace-gzhg4" Feb 02 13:58:04 crc kubenswrapper[4721]: I0202 13:58:04.073133 4721 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fb4ss\" (UniqueName: \"kubernetes.io/projected/4d9d6046-cd2e-407a-8264-66d3a10338a5-kube-api-access-fb4ss\") pod \"redhat-marketplace-gzhg4\" (UID: \"4d9d6046-cd2e-407a-8264-66d3a10338a5\") " pod="openshift-marketplace/redhat-marketplace-gzhg4" Feb 02 13:58:04 crc kubenswrapper[4721]: I0202 13:58:04.235134 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gzhg4" Feb 02 13:58:04 crc kubenswrapper[4721]: I0202 13:58:04.737716 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gzhg4"] Feb 02 13:58:05 crc kubenswrapper[4721]: I0202 13:58:05.073432 4721 generic.go:334] "Generic (PLEG): container finished" podID="4d9d6046-cd2e-407a-8264-66d3a10338a5" containerID="40ca39e7055aabebd0ab710e77cf21f9170c4e29e95631c702dc47ca462e9e37" exitCode=0 Feb 02 13:58:05 crc kubenswrapper[4721]: I0202 13:58:05.073467 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gzhg4" event={"ID":"4d9d6046-cd2e-407a-8264-66d3a10338a5","Type":"ContainerDied","Data":"40ca39e7055aabebd0ab710e77cf21f9170c4e29e95631c702dc47ca462e9e37"} Feb 02 13:58:05 crc kubenswrapper[4721]: I0202 13:58:05.073488 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gzhg4" event={"ID":"4d9d6046-cd2e-407a-8264-66d3a10338a5","Type":"ContainerStarted","Data":"e939bbbba90ed00e2c720add47adf1da47f625f58a29dd833ce0584c2aeb5487"} Feb 02 13:58:07 crc kubenswrapper[4721]: I0202 13:58:07.094667 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gzhg4" event={"ID":"4d9d6046-cd2e-407a-8264-66d3a10338a5","Type":"ContainerStarted","Data":"5b58ce4ab5d56c7b60cf36de1bef73a6bb590aab021624224e5cad64c6ef0fc4"} Feb 02 13:58:08 crc kubenswrapper[4721]: I0202 13:58:08.286989 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q6jkv" Feb 02 13:58:08 crc kubenswrapper[4721]: I0202 13:58:08.287345 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q6jkv" Feb 02 13:58:08 crc kubenswrapper[4721]: I0202 13:58:08.332169 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q6jkv" Feb 02 13:58:08 crc kubenswrapper[4721]: I0202 13:58:08.410026 4721 scope.go:117] "RemoveContainer" containerID="b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95" Feb 02 13:58:08 crc kubenswrapper[4721]: E0202 13:58:08.410320 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 13:58:09 crc kubenswrapper[4721]: I0202 13:58:09.191700 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q6jkv" Feb 02 13:58:10 crc kubenswrapper[4721]: I0202 13:58:09.999743 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q6jkv"] Feb 02 13:58:10 crc kubenswrapper[4721]: I0202 13:58:10.141511 4721 generic.go:334] "Generic 
(PLEG): container finished" podID="4d9d6046-cd2e-407a-8264-66d3a10338a5" containerID="5b58ce4ab5d56c7b60cf36de1bef73a6bb590aab021624224e5cad64c6ef0fc4" exitCode=0 Feb 02 13:58:10 crc kubenswrapper[4721]: I0202 13:58:10.141548 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gzhg4" event={"ID":"4d9d6046-cd2e-407a-8264-66d3a10338a5","Type":"ContainerDied","Data":"5b58ce4ab5d56c7b60cf36de1bef73a6bb590aab021624224e5cad64c6ef0fc4"} Feb 02 13:58:10 crc kubenswrapper[4721]: I0202 13:58:10.180506 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gc4db"] Feb 02 13:58:10 crc kubenswrapper[4721]: I0202 13:58:10.180767 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gc4db" podUID="3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1" containerName="registry-server" containerID="cri-o://f83c45e1ff378ffe8b7f795fb252604530ddd6b5c22fbafb2acc28ce75d5835a" gracePeriod=2 Feb 02 13:58:10 crc kubenswrapper[4721]: E0202 13:58:10.438813 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d4d1a7c_52fd_456d_ab0e_78a9c4529fd1.slice/crio-f83c45e1ff378ffe8b7f795fb252604530ddd6b5c22fbafb2acc28ce75d5835a.scope\": RecentStats: unable to find data in memory cache]" Feb 02 13:58:10 crc kubenswrapper[4721]: I0202 13:58:10.753733 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gc4db" Feb 02 13:58:10 crc kubenswrapper[4721]: I0202 13:58:10.821912 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9grc\" (UniqueName: \"kubernetes.io/projected/3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1-kube-api-access-f9grc\") pod \"3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1\" (UID: \"3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1\") " Feb 02 13:58:10 crc kubenswrapper[4721]: I0202 13:58:10.822003 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1-catalog-content\") pod \"3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1\" (UID: \"3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1\") " Feb 02 13:58:10 crc kubenswrapper[4721]: I0202 13:58:10.822162 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1-utilities\") pod \"3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1\" (UID: \"3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1\") " Feb 02 13:58:10 crc kubenswrapper[4721]: I0202 13:58:10.822770 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1-utilities" (OuterVolumeSpecName: "utilities") pod "3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1" (UID: "3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:58:10 crc kubenswrapper[4721]: I0202 13:58:10.823022 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:58:10 crc kubenswrapper[4721]: I0202 13:58:10.835336 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1-kube-api-access-f9grc" (OuterVolumeSpecName: "kube-api-access-f9grc") pod "3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1" (UID: "3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1"). InnerVolumeSpecName "kube-api-access-f9grc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:58:10 crc kubenswrapper[4721]: I0202 13:58:10.924987 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9grc\" (UniqueName: \"kubernetes.io/projected/3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1-kube-api-access-f9grc\") on node \"crc\" DevicePath \"\"" Feb 02 13:58:10 crc kubenswrapper[4721]: I0202 13:58:10.953258 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1" (UID: "3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:58:11 crc kubenswrapper[4721]: I0202 13:58:11.029257 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:58:11 crc kubenswrapper[4721]: I0202 13:58:11.159840 4721 generic.go:334] "Generic (PLEG): container finished" podID="3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1" containerID="f83c45e1ff378ffe8b7f795fb252604530ddd6b5c22fbafb2acc28ce75d5835a" exitCode=0 Feb 02 13:58:11 crc kubenswrapper[4721]: I0202 13:58:11.159924 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gc4db" event={"ID":"3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1","Type":"ContainerDied","Data":"f83c45e1ff378ffe8b7f795fb252604530ddd6b5c22fbafb2acc28ce75d5835a"} Feb 02 13:58:11 crc kubenswrapper[4721]: I0202 13:58:11.159957 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gc4db" event={"ID":"3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1","Type":"ContainerDied","Data":"c443084e3e1b254cd6cca1fcfdbe64c90be36a3dfa2b67b63b03d3820015e610"} Feb 02 13:58:11 crc kubenswrapper[4721]: I0202 13:58:11.159977 4721 scope.go:117] "RemoveContainer" containerID="f83c45e1ff378ffe8b7f795fb252604530ddd6b5c22fbafb2acc28ce75d5835a" Feb 02 13:58:11 crc kubenswrapper[4721]: I0202 13:58:11.160280 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gc4db" Feb 02 13:58:11 crc kubenswrapper[4721]: I0202 13:58:11.168684 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gzhg4" event={"ID":"4d9d6046-cd2e-407a-8264-66d3a10338a5","Type":"ContainerStarted","Data":"509d659a81e9649ed16c4124a959aa0f51c137c5e2085821387b15034b60cdea"} Feb 02 13:58:11 crc kubenswrapper[4721]: I0202 13:58:11.201830 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gzhg4" podStartSLOduration=2.71937431 podStartE2EDuration="8.201809495s" podCreationTimestamp="2026-02-02 13:58:03 +0000 UTC" firstStartedPulling="2026-02-02 13:58:05.076497433 +0000 UTC m=+3425.379011822" lastFinishedPulling="2026-02-02 13:58:10.558932608 +0000 UTC m=+3430.861447007" observedRunningTime="2026-02-02 13:58:11.192752722 +0000 UTC m=+3431.495267111" watchObservedRunningTime="2026-02-02 13:58:11.201809495 +0000 UTC m=+3431.504323874" Feb 02 13:58:11 crc kubenswrapper[4721]: I0202 13:58:11.206111 4721 scope.go:117] "RemoveContainer" containerID="cd748e7fb6f12d62b84d0d6332d25370d2d67d418388354bb68e32e6ced809d6" Feb 02 13:58:11 crc kubenswrapper[4721]: I0202 13:58:11.231229 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gc4db"] Feb 02 13:58:11 crc kubenswrapper[4721]: I0202 13:58:11.249900 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gc4db"] Feb 02 13:58:11 crc kubenswrapper[4721]: I0202 13:58:11.259285 4721 scope.go:117] "RemoveContainer" containerID="b146ac5d525d99b3d9e62ac67dc0abf39c407bd8cadfc21eb11915cc07946f3f" Feb 02 13:58:11 crc kubenswrapper[4721]: I0202 13:58:11.318445 4721 scope.go:117] "RemoveContainer" containerID="f83c45e1ff378ffe8b7f795fb252604530ddd6b5c22fbafb2acc28ce75d5835a" Feb 02 13:58:11 crc kubenswrapper[4721]: E0202 13:58:11.318874 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f83c45e1ff378ffe8b7f795fb252604530ddd6b5c22fbafb2acc28ce75d5835a\": container with ID starting with f83c45e1ff378ffe8b7f795fb252604530ddd6b5c22fbafb2acc28ce75d5835a not found: ID does not exist" containerID="f83c45e1ff378ffe8b7f795fb252604530ddd6b5c22fbafb2acc28ce75d5835a" Feb 02 13:58:11 crc kubenswrapper[4721]: I0202 13:58:11.318905 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f83c45e1ff378ffe8b7f795fb252604530ddd6b5c22fbafb2acc28ce75d5835a"} err="failed to get container status \"f83c45e1ff378ffe8b7f795fb252604530ddd6b5c22fbafb2acc28ce75d5835a\": rpc error: code = NotFound desc = could not find container \"f83c45e1ff378ffe8b7f795fb252604530ddd6b5c22fbafb2acc28ce75d5835a\": container with ID starting with f83c45e1ff378ffe8b7f795fb252604530ddd6b5c22fbafb2acc28ce75d5835a not found: ID does not exist" Feb 02 13:58:11 crc kubenswrapper[4721]: I0202 13:58:11.318934 4721 scope.go:117] "RemoveContainer" containerID="cd748e7fb6f12d62b84d0d6332d25370d2d67d418388354bb68e32e6ced809d6" Feb 02 13:58:11 crc kubenswrapper[4721]: E0202 13:58:11.319225 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd748e7fb6f12d62b84d0d6332d25370d2d67d418388354bb68e32e6ced809d6\": container with ID starting with cd748e7fb6f12d62b84d0d6332d25370d2d67d418388354bb68e32e6ced809d6 not found: ID does not exist" 
containerID="cd748e7fb6f12d62b84d0d6332d25370d2d67d418388354bb68e32e6ced809d6" Feb 02 13:58:11 crc kubenswrapper[4721]: I0202 13:58:11.319256 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd748e7fb6f12d62b84d0d6332d25370d2d67d418388354bb68e32e6ced809d6"} err="failed to get container status \"cd748e7fb6f12d62b84d0d6332d25370d2d67d418388354bb68e32e6ced809d6\": rpc error: code = NotFound desc = could not find container \"cd748e7fb6f12d62b84d0d6332d25370d2d67d418388354bb68e32e6ced809d6\": container with ID starting with cd748e7fb6f12d62b84d0d6332d25370d2d67d418388354bb68e32e6ced809d6 not found: ID does not exist" Feb 02 13:58:11 crc kubenswrapper[4721]: I0202 13:58:11.319285 4721 scope.go:117] "RemoveContainer" containerID="b146ac5d525d99b3d9e62ac67dc0abf39c407bd8cadfc21eb11915cc07946f3f" Feb 02 13:58:11 crc kubenswrapper[4721]: E0202 13:58:11.319651 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b146ac5d525d99b3d9e62ac67dc0abf39c407bd8cadfc21eb11915cc07946f3f\": container with ID starting with b146ac5d525d99b3d9e62ac67dc0abf39c407bd8cadfc21eb11915cc07946f3f not found: ID does not exist" containerID="b146ac5d525d99b3d9e62ac67dc0abf39c407bd8cadfc21eb11915cc07946f3f" Feb 02 13:58:11 crc kubenswrapper[4721]: I0202 13:58:11.319709 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b146ac5d525d99b3d9e62ac67dc0abf39c407bd8cadfc21eb11915cc07946f3f"} err="failed to get container status \"b146ac5d525d99b3d9e62ac67dc0abf39c407bd8cadfc21eb11915cc07946f3f\": rpc error: code = NotFound desc = could not find container \"b146ac5d525d99b3d9e62ac67dc0abf39c407bd8cadfc21eb11915cc07946f3f\": container with ID starting with b146ac5d525d99b3d9e62ac67dc0abf39c407bd8cadfc21eb11915cc07946f3f not found: ID does not exist" Feb 02 13:58:12 crc kubenswrapper[4721]: I0202 13:58:12.425219 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1" path="/var/lib/kubelet/pods/3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1/volumes" Feb 02 13:58:14 crc kubenswrapper[4721]: I0202 13:58:14.235441 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gzhg4" Feb 02 13:58:14 crc kubenswrapper[4721]: I0202 13:58:14.235817 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gzhg4" Feb 02 13:58:14 crc kubenswrapper[4721]: I0202 13:58:14.291275 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gzhg4" Feb 02 13:58:15 crc kubenswrapper[4721]: I0202 13:58:15.267605 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gzhg4" Feb 02 13:58:16 crc kubenswrapper[4721]: I0202 13:58:16.370731 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gzhg4"] Feb 02 13:58:17 crc kubenswrapper[4721]: I0202 13:58:17.231297 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gzhg4" podUID="4d9d6046-cd2e-407a-8264-66d3a10338a5" containerName="registry-server" containerID="cri-o://509d659a81e9649ed16c4124a959aa0f51c137c5e2085821387b15034b60cdea" gracePeriod=2 Feb 02 13:58:17 crc kubenswrapper[4721]: I0202 13:58:17.799436 4721 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gzhg4" Feb 02 13:58:17 crc kubenswrapper[4721]: I0202 13:58:17.916542 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d9d6046-cd2e-407a-8264-66d3a10338a5-utilities\") pod \"4d9d6046-cd2e-407a-8264-66d3a10338a5\" (UID: \"4d9d6046-cd2e-407a-8264-66d3a10338a5\") " Feb 02 13:58:17 crc kubenswrapper[4721]: I0202 13:58:17.916709 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb4ss\" (UniqueName: \"kubernetes.io/projected/4d9d6046-cd2e-407a-8264-66d3a10338a5-kube-api-access-fb4ss\") pod \"4d9d6046-cd2e-407a-8264-66d3a10338a5\" (UID: \"4d9d6046-cd2e-407a-8264-66d3a10338a5\") " Feb 02 13:58:17 crc kubenswrapper[4721]: I0202 13:58:17.916738 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d9d6046-cd2e-407a-8264-66d3a10338a5-catalog-content\") pod \"4d9d6046-cd2e-407a-8264-66d3a10338a5\" (UID: \"4d9d6046-cd2e-407a-8264-66d3a10338a5\") " Feb 02 13:58:17 crc kubenswrapper[4721]: I0202 13:58:17.917364 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d9d6046-cd2e-407a-8264-66d3a10338a5-utilities" (OuterVolumeSpecName: "utilities") pod "4d9d6046-cd2e-407a-8264-66d3a10338a5" (UID: "4d9d6046-cd2e-407a-8264-66d3a10338a5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:58:17 crc kubenswrapper[4721]: I0202 13:58:17.918085 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d9d6046-cd2e-407a-8264-66d3a10338a5-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 13:58:17 crc kubenswrapper[4721]: I0202 13:58:17.924093 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d9d6046-cd2e-407a-8264-66d3a10338a5-kube-api-access-fb4ss" (OuterVolumeSpecName: "kube-api-access-fb4ss") pod "4d9d6046-cd2e-407a-8264-66d3a10338a5" (UID: "4d9d6046-cd2e-407a-8264-66d3a10338a5"). InnerVolumeSpecName "kube-api-access-fb4ss". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 13:58:17 crc kubenswrapper[4721]: I0202 13:58:17.942614 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d9d6046-cd2e-407a-8264-66d3a10338a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d9d6046-cd2e-407a-8264-66d3a10338a5" (UID: "4d9d6046-cd2e-407a-8264-66d3a10338a5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 13:58:18 crc kubenswrapper[4721]: I0202 13:58:18.020102 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fb4ss\" (UniqueName: \"kubernetes.io/projected/4d9d6046-cd2e-407a-8264-66d3a10338a5-kube-api-access-fb4ss\") on node \"crc\" DevicePath \"\"" Feb 02 13:58:18 crc kubenswrapper[4721]: I0202 13:58:18.020434 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d9d6046-cd2e-407a-8264-66d3a10338a5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 13:58:18 crc kubenswrapper[4721]: I0202 13:58:18.245025 4721 generic.go:334] "Generic (PLEG): container finished" podID="4d9d6046-cd2e-407a-8264-66d3a10338a5" containerID="509d659a81e9649ed16c4124a959aa0f51c137c5e2085821387b15034b60cdea" exitCode=0 Feb 02 13:58:18 crc kubenswrapper[4721]: I0202 13:58:18.245161 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gzhg4" event={"ID":"4d9d6046-cd2e-407a-8264-66d3a10338a5","Type":"ContainerDied","Data":"509d659a81e9649ed16c4124a959aa0f51c137c5e2085821387b15034b60cdea"} Feb 02 13:58:18 crc kubenswrapper[4721]: I0202 13:58:18.245174 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gzhg4" Feb 02 13:58:18 crc kubenswrapper[4721]: I0202 13:58:18.245192 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gzhg4" event={"ID":"4d9d6046-cd2e-407a-8264-66d3a10338a5","Type":"ContainerDied","Data":"e939bbbba90ed00e2c720add47adf1da47f625f58a29dd833ce0584c2aeb5487"} Feb 02 13:58:18 crc kubenswrapper[4721]: I0202 13:58:18.245214 4721 scope.go:117] "RemoveContainer" containerID="509d659a81e9649ed16c4124a959aa0f51c137c5e2085821387b15034b60cdea" Feb 02 13:58:18 crc kubenswrapper[4721]: I0202 13:58:18.265643 4721 scope.go:117] "RemoveContainer" containerID="5b58ce4ab5d56c7b60cf36de1bef73a6bb590aab021624224e5cad64c6ef0fc4" Feb 02 13:58:18 crc kubenswrapper[4721]: I0202 13:58:18.283320 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gzhg4"] Feb 02 13:58:18 crc kubenswrapper[4721]: I0202 13:58:18.294408 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gzhg4"] Feb 02 13:58:18 crc kubenswrapper[4721]: I0202 13:58:18.295881 4721 scope.go:117] "RemoveContainer" containerID="40ca39e7055aabebd0ab710e77cf21f9170c4e29e95631c702dc47ca462e9e37" Feb 02 13:58:18 crc kubenswrapper[4721]: I0202 13:58:18.358639 4721 scope.go:117] "RemoveContainer" containerID="509d659a81e9649ed16c4124a959aa0f51c137c5e2085821387b15034b60cdea" Feb 02 13:58:18 crc kubenswrapper[4721]: E0202 13:58:18.359034 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"509d659a81e9649ed16c4124a959aa0f51c137c5e2085821387b15034b60cdea\": container with ID starting with 509d659a81e9649ed16c4124a959aa0f51c137c5e2085821387b15034b60cdea not found: ID does not exist" containerID="509d659a81e9649ed16c4124a959aa0f51c137c5e2085821387b15034b60cdea" Feb 02 13:58:18 crc kubenswrapper[4721]: I0202 13:58:18.359088 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"509d659a81e9649ed16c4124a959aa0f51c137c5e2085821387b15034b60cdea"} err="failed to get container status 
\"509d659a81e9649ed16c4124a959aa0f51c137c5e2085821387b15034b60cdea\": rpc error: code = NotFound desc = could not find container \"509d659a81e9649ed16c4124a959aa0f51c137c5e2085821387b15034b60cdea\": container with ID starting with 509d659a81e9649ed16c4124a959aa0f51c137c5e2085821387b15034b60cdea not found: ID does not exist" Feb 02 13:58:18 crc kubenswrapper[4721]: I0202 13:58:18.359115 4721 scope.go:117] "RemoveContainer" containerID="5b58ce4ab5d56c7b60cf36de1bef73a6bb590aab021624224e5cad64c6ef0fc4" Feb 02 13:58:18 crc kubenswrapper[4721]: E0202 13:58:18.359444 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b58ce4ab5d56c7b60cf36de1bef73a6bb590aab021624224e5cad64c6ef0fc4\": container with ID starting with 5b58ce4ab5d56c7b60cf36de1bef73a6bb590aab021624224e5cad64c6ef0fc4 not found: ID does not exist" containerID="5b58ce4ab5d56c7b60cf36de1bef73a6bb590aab021624224e5cad64c6ef0fc4" Feb 02 13:58:18 crc kubenswrapper[4721]: I0202 13:58:18.359473 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b58ce4ab5d56c7b60cf36de1bef73a6bb590aab021624224e5cad64c6ef0fc4"} err="failed to get container status \"5b58ce4ab5d56c7b60cf36de1bef73a6bb590aab021624224e5cad64c6ef0fc4\": rpc error: code = NotFound desc = could not find container \"5b58ce4ab5d56c7b60cf36de1bef73a6bb590aab021624224e5cad64c6ef0fc4\": container with ID starting with 5b58ce4ab5d56c7b60cf36de1bef73a6bb590aab021624224e5cad64c6ef0fc4 not found: ID does not exist" Feb 02 13:58:18 crc kubenswrapper[4721]: I0202 13:58:18.359493 4721 scope.go:117] "RemoveContainer" containerID="40ca39e7055aabebd0ab710e77cf21f9170c4e29e95631c702dc47ca462e9e37" Feb 02 13:58:18 crc kubenswrapper[4721]: E0202 13:58:18.359881 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40ca39e7055aabebd0ab710e77cf21f9170c4e29e95631c702dc47ca462e9e37\": container with ID starting with 40ca39e7055aabebd0ab710e77cf21f9170c4e29e95631c702dc47ca462e9e37 not found: ID does not exist" containerID="40ca39e7055aabebd0ab710e77cf21f9170c4e29e95631c702dc47ca462e9e37" Feb 02 13:58:18 crc kubenswrapper[4721]: I0202 13:58:18.359904 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40ca39e7055aabebd0ab710e77cf21f9170c4e29e95631c702dc47ca462e9e37"} err="failed to get container status \"40ca39e7055aabebd0ab710e77cf21f9170c4e29e95631c702dc47ca462e9e37\": rpc error: code = NotFound desc = could not find container \"40ca39e7055aabebd0ab710e77cf21f9170c4e29e95631c702dc47ca462e9e37\": container with ID starting with 40ca39e7055aabebd0ab710e77cf21f9170c4e29e95631c702dc47ca462e9e37 not found: ID does not exist" Feb 02 13:58:18 crc kubenswrapper[4721]: I0202 13:58:18.423033 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d9d6046-cd2e-407a-8264-66d3a10338a5" path="/var/lib/kubelet/pods/4d9d6046-cd2e-407a-8264-66d3a10338a5/volumes" Feb 02 13:58:21 crc kubenswrapper[4721]: I0202 13:58:21.410429 4721 scope.go:117] "RemoveContainer" containerID="b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95" Feb 02 13:58:22 crc kubenswrapper[4721]: I0202 13:58:22.283792 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" 
event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerStarted","Data":"a3cd5fb955e420aefedf3293a111222e0af1cbe46d543be8593a16454e1d2d8d"} Feb 02 14:00:00 crc kubenswrapper[4721]: I0202 14:00:00.227829 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500680-8fnmv"] Feb 02 14:00:00 crc kubenswrapper[4721]: E0202 14:00:00.229542 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1" containerName="extract-utilities" Feb 02 14:00:00 crc kubenswrapper[4721]: I0202 14:00:00.229566 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1" containerName="extract-utilities" Feb 02 14:00:00 crc kubenswrapper[4721]: E0202 14:00:00.229619 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1" containerName="registry-server" Feb 02 14:00:00 crc kubenswrapper[4721]: I0202 14:00:00.229629 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1" containerName="registry-server" Feb 02 14:00:00 crc kubenswrapper[4721]: E0202 14:00:00.229653 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1" containerName="extract-content" Feb 02 14:00:00 crc kubenswrapper[4721]: I0202 14:00:00.229662 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1" containerName="extract-content" Feb 02 14:00:00 crc kubenswrapper[4721]: E0202 14:00:00.229678 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d9d6046-cd2e-407a-8264-66d3a10338a5" containerName="registry-server" Feb 02 14:00:00 crc kubenswrapper[4721]: I0202 14:00:00.229684 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d9d6046-cd2e-407a-8264-66d3a10338a5" containerName="registry-server" Feb 02 14:00:00 crc kubenswrapper[4721]: E0202 14:00:00.229701 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d9d6046-cd2e-407a-8264-66d3a10338a5" containerName="extract-content" Feb 02 14:00:00 crc kubenswrapper[4721]: I0202 14:00:00.229707 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d9d6046-cd2e-407a-8264-66d3a10338a5" containerName="extract-content" Feb 02 14:00:00 crc kubenswrapper[4721]: E0202 14:00:00.229739 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d9d6046-cd2e-407a-8264-66d3a10338a5" containerName="extract-utilities" Feb 02 14:00:00 crc kubenswrapper[4721]: I0202 14:00:00.229747 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d9d6046-cd2e-407a-8264-66d3a10338a5" containerName="extract-utilities" Feb 02 14:00:00 crc kubenswrapper[4721]: I0202 14:00:00.231151 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d4d1a7c-52fd-456d-ab0e-78a9c4529fd1" containerName="registry-server" Feb 02 14:00:00 crc kubenswrapper[4721]: I0202 14:00:00.231211 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d9d6046-cd2e-407a-8264-66d3a10338a5" containerName="registry-server" Feb 02 14:00:00 crc kubenswrapper[4721]: I0202 14:00:00.232516 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-8fnmv" Feb 02 14:00:00 crc kubenswrapper[4721]: I0202 14:00:00.235901 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 14:00:00 crc kubenswrapper[4721]: I0202 14:00:00.236119 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 14:00:00 crc kubenswrapper[4721]: I0202 14:00:00.244160 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500680-8fnmv"] Feb 02 14:00:00 crc kubenswrapper[4721]: I0202 14:00:00.409510 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3a359c1-b4b9-4b88-8bc0-da7c97b7225f-config-volume\") pod \"collect-profiles-29500680-8fnmv\" (UID: \"a3a359c1-b4b9-4b88-8bc0-da7c97b7225f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-8fnmv" Feb 02 14:00:00 crc kubenswrapper[4721]: I0202 14:00:00.409614 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbcb4\" (UniqueName: \"kubernetes.io/projected/a3a359c1-b4b9-4b88-8bc0-da7c97b7225f-kube-api-access-rbcb4\") pod \"collect-profiles-29500680-8fnmv\" (UID: \"a3a359c1-b4b9-4b88-8bc0-da7c97b7225f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-8fnmv" Feb 02 14:00:00 crc kubenswrapper[4721]: I0202 14:00:00.409659 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3a359c1-b4b9-4b88-8bc0-da7c97b7225f-secret-volume\") pod \"collect-profiles-29500680-8fnmv\" (UID: \"a3a359c1-b4b9-4b88-8bc0-da7c97b7225f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-8fnmv" Feb 02 14:00:00 crc kubenswrapper[4721]: I0202 14:00:00.512293 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3a359c1-b4b9-4b88-8bc0-da7c97b7225f-config-volume\") pod \"collect-profiles-29500680-8fnmv\" (UID: \"a3a359c1-b4b9-4b88-8bc0-da7c97b7225f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-8fnmv" Feb 02 14:00:00 crc kubenswrapper[4721]: I0202 14:00:00.512415 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbcb4\" (UniqueName: \"kubernetes.io/projected/a3a359c1-b4b9-4b88-8bc0-da7c97b7225f-kube-api-access-rbcb4\") pod \"collect-profiles-29500680-8fnmv\" (UID: \"a3a359c1-b4b9-4b88-8bc0-da7c97b7225f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-8fnmv" Feb 02 14:00:00 crc kubenswrapper[4721]: I0202 14:00:00.512459 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3a359c1-b4b9-4b88-8bc0-da7c97b7225f-secret-volume\") pod \"collect-profiles-29500680-8fnmv\" (UID: \"a3a359c1-b4b9-4b88-8bc0-da7c97b7225f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-8fnmv" Feb 02 14:00:00 crc kubenswrapper[4721]: I0202 14:00:00.515537 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 14:00:00 crc kubenswrapper[4721]: I0202 
14:00:00.519655 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3a359c1-b4b9-4b88-8bc0-da7c97b7225f-secret-volume\") pod \"collect-profiles-29500680-8fnmv\" (UID: \"a3a359c1-b4b9-4b88-8bc0-da7c97b7225f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-8fnmv" Feb 02 14:00:00 crc kubenswrapper[4721]: I0202 14:00:00.527756 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3a359c1-b4b9-4b88-8bc0-da7c97b7225f-config-volume\") pod \"collect-profiles-29500680-8fnmv\" (UID: \"a3a359c1-b4b9-4b88-8bc0-da7c97b7225f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-8fnmv" Feb 02 14:00:00 crc kubenswrapper[4721]: I0202 14:00:00.533737 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbcb4\" (UniqueName: \"kubernetes.io/projected/a3a359c1-b4b9-4b88-8bc0-da7c97b7225f-kube-api-access-rbcb4\") pod \"collect-profiles-29500680-8fnmv\" (UID: \"a3a359c1-b4b9-4b88-8bc0-da7c97b7225f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-8fnmv" Feb 02 14:00:00 crc kubenswrapper[4721]: I0202 14:00:00.564261 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 14:00:00 crc kubenswrapper[4721]: I0202 14:00:00.573351 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-8fnmv" Feb 02 14:00:01 crc kubenswrapper[4721]: I0202 14:00:01.067217 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500680-8fnmv"] Feb 02 14:00:01 crc kubenswrapper[4721]: I0202 14:00:01.296197 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-8fnmv" event={"ID":"a3a359c1-b4b9-4b88-8bc0-da7c97b7225f","Type":"ContainerStarted","Data":"29f49894455bd7961b6e45a7b07191bfd0b099e4c502956048b722e54f7a2403"} Feb 02 14:00:02 crc kubenswrapper[4721]: I0202 14:00:02.305780 4721 generic.go:334] "Generic (PLEG): container finished" podID="a3a359c1-b4b9-4b88-8bc0-da7c97b7225f" containerID="626dacec11ff7af02b246606cab5fd3871139e6bcc650e26f41f5bc94d95a99b" exitCode=0 Feb 02 14:00:02 crc kubenswrapper[4721]: I0202 14:00:02.305839 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-8fnmv" event={"ID":"a3a359c1-b4b9-4b88-8bc0-da7c97b7225f","Type":"ContainerDied","Data":"626dacec11ff7af02b246606cab5fd3871139e6bcc650e26f41f5bc94d95a99b"} Feb 02 14:00:03 crc kubenswrapper[4721]: I0202 14:00:03.744804 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-8fnmv" Feb 02 14:00:03 crc kubenswrapper[4721]: I0202 14:00:03.897815 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3a359c1-b4b9-4b88-8bc0-da7c97b7225f-secret-volume\") pod \"a3a359c1-b4b9-4b88-8bc0-da7c97b7225f\" (UID: \"a3a359c1-b4b9-4b88-8bc0-da7c97b7225f\") " Feb 02 14:00:03 crc kubenswrapper[4721]: I0202 14:00:03.898003 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbcb4\" (UniqueName: \"kubernetes.io/projected/a3a359c1-b4b9-4b88-8bc0-da7c97b7225f-kube-api-access-rbcb4\") pod \"a3a359c1-b4b9-4b88-8bc0-da7c97b7225f\" (UID: \"a3a359c1-b4b9-4b88-8bc0-da7c97b7225f\") " Feb 02 14:00:03 crc kubenswrapper[4721]: I0202 14:00:03.898119 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3a359c1-b4b9-4b88-8bc0-da7c97b7225f-config-volume\") pod \"a3a359c1-b4b9-4b88-8bc0-da7c97b7225f\" (UID: \"a3a359c1-b4b9-4b88-8bc0-da7c97b7225f\") " Feb 02 14:00:03 crc kubenswrapper[4721]: I0202 14:00:03.898796 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3a359c1-b4b9-4b88-8bc0-da7c97b7225f-config-volume" (OuterVolumeSpecName: "config-volume") pod "a3a359c1-b4b9-4b88-8bc0-da7c97b7225f" (UID: "a3a359c1-b4b9-4b88-8bc0-da7c97b7225f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:00:03 crc kubenswrapper[4721]: I0202 14:00:03.904198 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3a359c1-b4b9-4b88-8bc0-da7c97b7225f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a3a359c1-b4b9-4b88-8bc0-da7c97b7225f" (UID: "a3a359c1-b4b9-4b88-8bc0-da7c97b7225f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:00:03 crc kubenswrapper[4721]: I0202 14:00:03.904777 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3a359c1-b4b9-4b88-8bc0-da7c97b7225f-kube-api-access-rbcb4" (OuterVolumeSpecName: "kube-api-access-rbcb4") pod "a3a359c1-b4b9-4b88-8bc0-da7c97b7225f" (UID: "a3a359c1-b4b9-4b88-8bc0-da7c97b7225f"). InnerVolumeSpecName "kube-api-access-rbcb4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:00:04 crc kubenswrapper[4721]: I0202 14:00:04.001856 4721 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3a359c1-b4b9-4b88-8bc0-da7c97b7225f-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 14:00:04 crc kubenswrapper[4721]: I0202 14:00:04.001901 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbcb4\" (UniqueName: \"kubernetes.io/projected/a3a359c1-b4b9-4b88-8bc0-da7c97b7225f-kube-api-access-rbcb4\") on node \"crc\" DevicePath \"\"" Feb 02 14:00:04 crc kubenswrapper[4721]: I0202 14:00:04.001915 4721 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3a359c1-b4b9-4b88-8bc0-da7c97b7225f-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 14:00:04 crc kubenswrapper[4721]: I0202 14:00:04.328498 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-8fnmv" event={"ID":"a3a359c1-b4b9-4b88-8bc0-da7c97b7225f","Type":"ContainerDied","Data":"29f49894455bd7961b6e45a7b07191bfd0b099e4c502956048b722e54f7a2403"} Feb 02 14:00:04 crc kubenswrapper[4721]: I0202 14:00:04.328536 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29f49894455bd7961b6e45a7b07191bfd0b099e4c502956048b722e54f7a2403" Feb 02 14:00:04 crc kubenswrapper[4721]: I0202 14:00:04.328598 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500680-8fnmv" Feb 02 14:00:04 crc kubenswrapper[4721]: I0202 14:00:04.870311 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500635-qjbdn"] Feb 02 14:00:04 crc kubenswrapper[4721]: I0202 14:00:04.885157 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500635-qjbdn"] Feb 02 14:00:06 crc kubenswrapper[4721]: I0202 14:00:06.435832 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13c984cb-b059-4e3f-86f2-8abca8e6942e" path="/var/lib/kubelet/pods/13c984cb-b059-4e3f-86f2-8abca8e6942e/volumes" Feb 02 14:00:14 crc kubenswrapper[4721]: I0202 14:00:14.516452 4721 scope.go:117] "RemoveContainer" containerID="dbd339a45a88197a4052721f67969cee0f84e4e520076f5a19a1a3e14ab9298f" Feb 02 14:00:44 crc kubenswrapper[4721]: I0202 14:00:44.763466 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:00:44 crc kubenswrapper[4721]: I0202 14:00:44.764080 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:01:00 crc kubenswrapper[4721]: I0202 14:01:00.444719 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29500681-7xs4g"] Feb 02 14:01:00 crc kubenswrapper[4721]: E0202 14:01:00.446703 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3a359c1-b4b9-4b88-8bc0-da7c97b7225f" 
containerName="collect-profiles" Feb 02 14:01:00 crc kubenswrapper[4721]: I0202 14:01:00.446787 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3a359c1-b4b9-4b88-8bc0-da7c97b7225f" containerName="collect-profiles" Feb 02 14:01:00 crc kubenswrapper[4721]: I0202 14:01:00.447092 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3a359c1-b4b9-4b88-8bc0-da7c97b7225f" containerName="collect-profiles" Feb 02 14:01:00 crc kubenswrapper[4721]: I0202 14:01:00.447910 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500681-7xs4g"] Feb 02 14:01:00 crc kubenswrapper[4721]: I0202 14:01:00.448060 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500681-7xs4g" Feb 02 14:01:00 crc kubenswrapper[4721]: I0202 14:01:00.568087 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vxcl\" (UniqueName: \"kubernetes.io/projected/9b53b618-4727-4a17-a000-ba0ccd1084c1-kube-api-access-8vxcl\") pod \"keystone-cron-29500681-7xs4g\" (UID: \"9b53b618-4727-4a17-a000-ba0ccd1084c1\") " pod="openstack/keystone-cron-29500681-7xs4g" Feb 02 14:01:00 crc kubenswrapper[4721]: I0202 14:01:00.568244 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b53b618-4727-4a17-a000-ba0ccd1084c1-fernet-keys\") pod \"keystone-cron-29500681-7xs4g\" (UID: \"9b53b618-4727-4a17-a000-ba0ccd1084c1\") " pod="openstack/keystone-cron-29500681-7xs4g" Feb 02 14:01:00 crc kubenswrapper[4721]: I0202 14:01:00.568377 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b53b618-4727-4a17-a000-ba0ccd1084c1-config-data\") pod \"keystone-cron-29500681-7xs4g\" (UID: \"9b53b618-4727-4a17-a000-ba0ccd1084c1\") " pod="openstack/keystone-cron-29500681-7xs4g" Feb 02 14:01:00 crc kubenswrapper[4721]: I0202 14:01:00.569058 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b53b618-4727-4a17-a000-ba0ccd1084c1-combined-ca-bundle\") pod \"keystone-cron-29500681-7xs4g\" (UID: \"9b53b618-4727-4a17-a000-ba0ccd1084c1\") " pod="openstack/keystone-cron-29500681-7xs4g" Feb 02 14:01:00 crc kubenswrapper[4721]: I0202 14:01:00.671018 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vxcl\" (UniqueName: \"kubernetes.io/projected/9b53b618-4727-4a17-a000-ba0ccd1084c1-kube-api-access-8vxcl\") pod \"keystone-cron-29500681-7xs4g\" (UID: \"9b53b618-4727-4a17-a000-ba0ccd1084c1\") " pod="openstack/keystone-cron-29500681-7xs4g" Feb 02 14:01:00 crc kubenswrapper[4721]: I0202 14:01:00.671138 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b53b618-4727-4a17-a000-ba0ccd1084c1-fernet-keys\") pod \"keystone-cron-29500681-7xs4g\" (UID: \"9b53b618-4727-4a17-a000-ba0ccd1084c1\") " pod="openstack/keystone-cron-29500681-7xs4g" Feb 02 14:01:00 crc kubenswrapper[4721]: I0202 14:01:00.671178 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b53b618-4727-4a17-a000-ba0ccd1084c1-config-data\") pod \"keystone-cron-29500681-7xs4g\" (UID: \"9b53b618-4727-4a17-a000-ba0ccd1084c1\") " 
pod="openstack/keystone-cron-29500681-7xs4g" Feb 02 14:01:00 crc kubenswrapper[4721]: I0202 14:01:00.671291 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b53b618-4727-4a17-a000-ba0ccd1084c1-combined-ca-bundle\") pod \"keystone-cron-29500681-7xs4g\" (UID: \"9b53b618-4727-4a17-a000-ba0ccd1084c1\") " pod="openstack/keystone-cron-29500681-7xs4g" Feb 02 14:01:00 crc kubenswrapper[4721]: I0202 14:01:00.679230 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b53b618-4727-4a17-a000-ba0ccd1084c1-combined-ca-bundle\") pod \"keystone-cron-29500681-7xs4g\" (UID: \"9b53b618-4727-4a17-a000-ba0ccd1084c1\") " pod="openstack/keystone-cron-29500681-7xs4g" Feb 02 14:01:00 crc kubenswrapper[4721]: I0202 14:01:00.679431 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b53b618-4727-4a17-a000-ba0ccd1084c1-fernet-keys\") pod \"keystone-cron-29500681-7xs4g\" (UID: \"9b53b618-4727-4a17-a000-ba0ccd1084c1\") " pod="openstack/keystone-cron-29500681-7xs4g" Feb 02 14:01:00 crc kubenswrapper[4721]: I0202 14:01:00.679772 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b53b618-4727-4a17-a000-ba0ccd1084c1-config-data\") pod \"keystone-cron-29500681-7xs4g\" (UID: \"9b53b618-4727-4a17-a000-ba0ccd1084c1\") " pod="openstack/keystone-cron-29500681-7xs4g" Feb 02 14:01:00 crc kubenswrapper[4721]: I0202 14:01:00.691140 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vxcl\" (UniqueName: \"kubernetes.io/projected/9b53b618-4727-4a17-a000-ba0ccd1084c1-kube-api-access-8vxcl\") pod \"keystone-cron-29500681-7xs4g\" (UID: \"9b53b618-4727-4a17-a000-ba0ccd1084c1\") " pod="openstack/keystone-cron-29500681-7xs4g" Feb 02 14:01:00 crc kubenswrapper[4721]: I0202 14:01:00.773041 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29500681-7xs4g" Feb 02 14:01:01 crc kubenswrapper[4721]: I0202 14:01:01.261797 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500681-7xs4g"] Feb 02 14:01:01 crc kubenswrapper[4721]: I0202 14:01:01.388316 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500681-7xs4g" event={"ID":"9b53b618-4727-4a17-a000-ba0ccd1084c1","Type":"ContainerStarted","Data":"06a522d4079d336fac9999ebe269964f7faeeaac86a9831fc3f7d27a0e2e96ef"} Feb 02 14:01:02 crc kubenswrapper[4721]: I0202 14:01:02.402993 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500681-7xs4g" event={"ID":"9b53b618-4727-4a17-a000-ba0ccd1084c1","Type":"ContainerStarted","Data":"9ab18faa360117f946acf77d5bbb30958f5c311100041690cf4612b97b1423d6"} Feb 02 14:01:02 crc kubenswrapper[4721]: I0202 14:01:02.430297 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29500681-7xs4g" podStartSLOduration=2.430276607 podStartE2EDuration="2.430276607s" podCreationTimestamp="2026-02-02 14:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:01:02.424000997 +0000 UTC m=+3602.726515386" watchObservedRunningTime="2026-02-02 14:01:02.430276607 +0000 UTC m=+3602.732790996" Feb 02 14:01:07 crc kubenswrapper[4721]: I0202 14:01:07.456701 4721 generic.go:334] "Generic (PLEG): container finished" podID="9b53b618-4727-4a17-a000-ba0ccd1084c1" containerID="9ab18faa360117f946acf77d5bbb30958f5c311100041690cf4612b97b1423d6" exitCode=0 Feb 02 14:01:07 crc kubenswrapper[4721]: I0202 14:01:07.457284 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500681-7xs4g" event={"ID":"9b53b618-4727-4a17-a000-ba0ccd1084c1","Type":"ContainerDied","Data":"9ab18faa360117f946acf77d5bbb30958f5c311100041690cf4612b97b1423d6"} Feb 02 14:01:08 crc kubenswrapper[4721]: I0202 14:01:08.828264 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29500681-7xs4g" Feb 02 14:01:08 crc kubenswrapper[4721]: I0202 14:01:08.916132 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b53b618-4727-4a17-a000-ba0ccd1084c1-fernet-keys\") pod \"9b53b618-4727-4a17-a000-ba0ccd1084c1\" (UID: \"9b53b618-4727-4a17-a000-ba0ccd1084c1\") " Feb 02 14:01:08 crc kubenswrapper[4721]: I0202 14:01:08.916368 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vxcl\" (UniqueName: \"kubernetes.io/projected/9b53b618-4727-4a17-a000-ba0ccd1084c1-kube-api-access-8vxcl\") pod \"9b53b618-4727-4a17-a000-ba0ccd1084c1\" (UID: \"9b53b618-4727-4a17-a000-ba0ccd1084c1\") " Feb 02 14:01:08 crc kubenswrapper[4721]: I0202 14:01:08.916481 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b53b618-4727-4a17-a000-ba0ccd1084c1-combined-ca-bundle\") pod \"9b53b618-4727-4a17-a000-ba0ccd1084c1\" (UID: \"9b53b618-4727-4a17-a000-ba0ccd1084c1\") " Feb 02 14:01:08 crc kubenswrapper[4721]: I0202 14:01:08.916545 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b53b618-4727-4a17-a000-ba0ccd1084c1-config-data\") pod \"9b53b618-4727-4a17-a000-ba0ccd1084c1\" (UID: \"9b53b618-4727-4a17-a000-ba0ccd1084c1\") " Feb 02 14:01:08 crc kubenswrapper[4721]: I0202 14:01:08.923370 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b53b618-4727-4a17-a000-ba0ccd1084c1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9b53b618-4727-4a17-a000-ba0ccd1084c1" (UID: "9b53b618-4727-4a17-a000-ba0ccd1084c1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:01:08 crc kubenswrapper[4721]: I0202 14:01:08.923678 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b53b618-4727-4a17-a000-ba0ccd1084c1-kube-api-access-8vxcl" (OuterVolumeSpecName: "kube-api-access-8vxcl") pod "9b53b618-4727-4a17-a000-ba0ccd1084c1" (UID: "9b53b618-4727-4a17-a000-ba0ccd1084c1"). InnerVolumeSpecName "kube-api-access-8vxcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:01:08 crc kubenswrapper[4721]: I0202 14:01:08.954298 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b53b618-4727-4a17-a000-ba0ccd1084c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b53b618-4727-4a17-a000-ba0ccd1084c1" (UID: "9b53b618-4727-4a17-a000-ba0ccd1084c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:01:08 crc kubenswrapper[4721]: I0202 14:01:08.987059 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b53b618-4727-4a17-a000-ba0ccd1084c1-config-data" (OuterVolumeSpecName: "config-data") pod "9b53b618-4727-4a17-a000-ba0ccd1084c1" (UID: "9b53b618-4727-4a17-a000-ba0ccd1084c1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:01:09 crc kubenswrapper[4721]: I0202 14:01:09.019976 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vxcl\" (UniqueName: \"kubernetes.io/projected/9b53b618-4727-4a17-a000-ba0ccd1084c1-kube-api-access-8vxcl\") on node \"crc\" DevicePath \"\"" Feb 02 14:01:09 crc kubenswrapper[4721]: I0202 14:01:09.020015 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b53b618-4727-4a17-a000-ba0ccd1084c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:01:09 crc kubenswrapper[4721]: I0202 14:01:09.020027 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b53b618-4727-4a17-a000-ba0ccd1084c1-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:01:09 crc kubenswrapper[4721]: I0202 14:01:09.020039 4721 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b53b618-4727-4a17-a000-ba0ccd1084c1-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 14:01:09 crc kubenswrapper[4721]: I0202 14:01:09.478859 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500681-7xs4g" event={"ID":"9b53b618-4727-4a17-a000-ba0ccd1084c1","Type":"ContainerDied","Data":"06a522d4079d336fac9999ebe269964f7faeeaac86a9831fc3f7d27a0e2e96ef"} Feb 02 14:01:09 crc kubenswrapper[4721]: I0202 14:01:09.478900 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06a522d4079d336fac9999ebe269964f7faeeaac86a9831fc3f7d27a0e2e96ef" Feb 02 14:01:09 crc kubenswrapper[4721]: I0202 14:01:09.478933 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500681-7xs4g" Feb 02 14:01:14 crc kubenswrapper[4721]: I0202 14:01:14.763556 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:01:14 crc kubenswrapper[4721]: I0202 14:01:14.765336 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:01:44 crc kubenswrapper[4721]: I0202 14:01:44.763142 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:01:44 crc kubenswrapper[4721]: I0202 14:01:44.763663 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:01:44 crc kubenswrapper[4721]: I0202 14:01:44.763707 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" Feb 02 
14:01:44 crc kubenswrapper[4721]: I0202 14:01:44.764804 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a3cd5fb955e420aefedf3293a111222e0af1cbe46d543be8593a16454e1d2d8d"} pod="openshift-machine-config-operator/machine-config-daemon-rppjz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 14:01:44 crc kubenswrapper[4721]: I0202 14:01:44.764874 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" containerID="cri-o://a3cd5fb955e420aefedf3293a111222e0af1cbe46d543be8593a16454e1d2d8d" gracePeriod=600 Feb 02 14:01:45 crc kubenswrapper[4721]: I0202 14:01:45.908187 4721 generic.go:334] "Generic (PLEG): container finished" podID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerID="a3cd5fb955e420aefedf3293a111222e0af1cbe46d543be8593a16454e1d2d8d" exitCode=0 Feb 02 14:01:45 crc kubenswrapper[4721]: I0202 14:01:45.908275 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerDied","Data":"a3cd5fb955e420aefedf3293a111222e0af1cbe46d543be8593a16454e1d2d8d"} Feb 02 14:01:45 crc kubenswrapper[4721]: I0202 14:01:45.908835 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerStarted","Data":"5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3"} Feb 02 14:01:45 crc kubenswrapper[4721]: I0202 14:01:45.908873 4721 scope.go:117] "RemoveContainer" containerID="b7871a57b1ca8a3b9cb67a09becc2a2b69da768974a437742c318057afd45c95" Feb 02 14:04:14 crc kubenswrapper[4721]: I0202 14:04:14.764246 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:04:14 crc kubenswrapper[4721]: I0202 14:04:14.764782 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:04:44 crc kubenswrapper[4721]: I0202 14:04:44.763238 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:04:44 crc kubenswrapper[4721]: I0202 14:04:44.763859 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:05:14 crc kubenswrapper[4721]: I0202 14:05:14.763211 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:05:14 crc kubenswrapper[4721]: I0202 14:05:14.763895 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:05:14 crc kubenswrapper[4721]: I0202 14:05:14.763944 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" Feb 02 14:05:14 crc kubenswrapper[4721]: I0202 14:05:14.764953 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3"} pod="openshift-machine-config-operator/machine-config-daemon-rppjz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 14:05:14 crc kubenswrapper[4721]: I0202 14:05:14.765014 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" containerID="cri-o://5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3" gracePeriod=600 Feb 02 14:05:14 crc kubenswrapper[4721]: E0202 14:05:14.913970 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:05:15 crc kubenswrapper[4721]: I0202 14:05:15.116511 4721 generic.go:334] "Generic (PLEG): container finished" podID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerID="5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3" exitCode=0 Feb 02 14:05:15 crc kubenswrapper[4721]: I0202 14:05:15.116585 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerDied","Data":"5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3"} Feb 02 14:05:15 crc kubenswrapper[4721]: I0202 14:05:15.116864 4721 scope.go:117] "RemoveContainer" containerID="a3cd5fb955e420aefedf3293a111222e0af1cbe46d543be8593a16454e1d2d8d" Feb 02 14:05:15 crc kubenswrapper[4721]: I0202 14:05:15.117605 4721 scope.go:117] "RemoveContainer" containerID="5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3" Feb 02 14:05:15 crc kubenswrapper[4721]: E0202 14:05:15.117879 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" 
podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:05:30 crc kubenswrapper[4721]: I0202 14:05:30.416357 4721 scope.go:117] "RemoveContainer" containerID="5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3" Feb 02 14:05:30 crc kubenswrapper[4721]: E0202 14:05:30.417224 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:05:44 crc kubenswrapper[4721]: I0202 14:05:44.409741 4721 scope.go:117] "RemoveContainer" containerID="5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3" Feb 02 14:05:44 crc kubenswrapper[4721]: E0202 14:05:44.410544 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:05:55 crc kubenswrapper[4721]: E0202 14:05:55.319054 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Feb 02 14:05:57 crc kubenswrapper[4721]: I0202 14:05:57.411735 4721 scope.go:117] "RemoveContainer" containerID="5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3" Feb 02 14:05:57 crc kubenswrapper[4721]: E0202 14:05:57.412407 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:06:10 crc kubenswrapper[4721]: I0202 14:06:10.410308 4721 scope.go:117] "RemoveContainer" containerID="5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3" Feb 02 14:06:10 crc kubenswrapper[4721]: E0202 14:06:10.410923 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:06:25 crc kubenswrapper[4721]: I0202 14:06:25.409905 4721 scope.go:117] "RemoveContainer" containerID="5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3" Feb 02 14:06:25 crc kubenswrapper[4721]: E0202 14:06:25.410818 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:06:38 crc kubenswrapper[4721]: I0202 14:06:38.409913 4721 scope.go:117] "RemoveContainer" containerID="5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3" Feb 02 14:06:38 crc kubenswrapper[4721]: E0202 14:06:38.410940 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:06:50 crc kubenswrapper[4721]: I0202 14:06:50.419354 4721 scope.go:117] "RemoveContainer" containerID="5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3" Feb 02 14:06:50 crc kubenswrapper[4721]: E0202 14:06:50.420239 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:07:04 crc kubenswrapper[4721]: I0202 14:07:04.409771 4721 scope.go:117] "RemoveContainer" containerID="5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3" Feb 02 14:07:04 crc kubenswrapper[4721]: E0202 14:07:04.410767 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:07:14 crc kubenswrapper[4721]: I0202 14:07:14.981386 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s6b8c"] Feb 02 14:07:14 crc kubenswrapper[4721]: E0202 14:07:14.985257 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b53b618-4727-4a17-a000-ba0ccd1084c1" containerName="keystone-cron" Feb 02 14:07:14 crc kubenswrapper[4721]: I0202 14:07:14.985295 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b53b618-4727-4a17-a000-ba0ccd1084c1" containerName="keystone-cron" Feb 02 14:07:14 crc kubenswrapper[4721]: I0202 14:07:14.985607 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b53b618-4727-4a17-a000-ba0ccd1084c1" containerName="keystone-cron" Feb 02 14:07:14 crc kubenswrapper[4721]: I0202 14:07:14.987781 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s6b8c" Feb 02 14:07:15 crc kubenswrapper[4721]: I0202 14:07:15.043408 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s6b8c"] Feb 02 14:07:15 crc kubenswrapper[4721]: I0202 14:07:15.056893 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9xpt\" (UniqueName: \"kubernetes.io/projected/132d342c-14e7-4cf5-a57b-ec168398bcd6-kube-api-access-g9xpt\") pod \"community-operators-s6b8c\" (UID: \"132d342c-14e7-4cf5-a57b-ec168398bcd6\") " pod="openshift-marketplace/community-operators-s6b8c" Feb 02 14:07:15 crc kubenswrapper[4721]: I0202 14:07:15.057094 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/132d342c-14e7-4cf5-a57b-ec168398bcd6-catalog-content\") pod \"community-operators-s6b8c\" (UID: \"132d342c-14e7-4cf5-a57b-ec168398bcd6\") " pod="openshift-marketplace/community-operators-s6b8c" Feb 02 14:07:15 crc kubenswrapper[4721]: I0202 14:07:15.057129 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/132d342c-14e7-4cf5-a57b-ec168398bcd6-utilities\") pod \"community-operators-s6b8c\" (UID: \"132d342c-14e7-4cf5-a57b-ec168398bcd6\") " pod="openshift-marketplace/community-operators-s6b8c" Feb 02 14:07:15 crc kubenswrapper[4721]: I0202 14:07:15.159350 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/132d342c-14e7-4cf5-a57b-ec168398bcd6-catalog-content\") pod \"community-operators-s6b8c\" (UID: \"132d342c-14e7-4cf5-a57b-ec168398bcd6\") " pod="openshift-marketplace/community-operators-s6b8c" Feb 02 14:07:15 crc kubenswrapper[4721]: I0202 14:07:15.159399 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/132d342c-14e7-4cf5-a57b-ec168398bcd6-utilities\") pod \"community-operators-s6b8c\" (UID: \"132d342c-14e7-4cf5-a57b-ec168398bcd6\") " pod="openshift-marketplace/community-operators-s6b8c" Feb 02 14:07:15 crc kubenswrapper[4721]: I0202 14:07:15.159568 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9xpt\" (UniqueName: \"kubernetes.io/projected/132d342c-14e7-4cf5-a57b-ec168398bcd6-kube-api-access-g9xpt\") pod \"community-operators-s6b8c\" (UID: \"132d342c-14e7-4cf5-a57b-ec168398bcd6\") " pod="openshift-marketplace/community-operators-s6b8c" Feb 02 14:07:15 crc kubenswrapper[4721]: I0202 14:07:15.160282 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/132d342c-14e7-4cf5-a57b-ec168398bcd6-catalog-content\") pod \"community-operators-s6b8c\" (UID: \"132d342c-14e7-4cf5-a57b-ec168398bcd6\") " pod="openshift-marketplace/community-operators-s6b8c" Feb 02 14:07:15 crc kubenswrapper[4721]: I0202 14:07:15.160369 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/132d342c-14e7-4cf5-a57b-ec168398bcd6-utilities\") pod \"community-operators-s6b8c\" (UID: \"132d342c-14e7-4cf5-a57b-ec168398bcd6\") " pod="openshift-marketplace/community-operators-s6b8c" Feb 02 14:07:15 crc kubenswrapper[4721]: I0202 14:07:15.179030 4721 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-g9xpt\" (UniqueName: \"kubernetes.io/projected/132d342c-14e7-4cf5-a57b-ec168398bcd6-kube-api-access-g9xpt\") pod \"community-operators-s6b8c\" (UID: \"132d342c-14e7-4cf5-a57b-ec168398bcd6\") " pod="openshift-marketplace/community-operators-s6b8c" Feb 02 14:07:15 crc kubenswrapper[4721]: I0202 14:07:15.375444 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s6b8c" Feb 02 14:07:16 crc kubenswrapper[4721]: I0202 14:07:16.080733 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s6b8c"] Feb 02 14:07:16 crc kubenswrapper[4721]: I0202 14:07:16.368994 4721 generic.go:334] "Generic (PLEG): container finished" podID="132d342c-14e7-4cf5-a57b-ec168398bcd6" containerID="a1e1933fddf132b29cc798adc427aff4954d9bb865810605d5b7733c86a3c42c" exitCode=0 Feb 02 14:07:16 crc kubenswrapper[4721]: I0202 14:07:16.369164 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6b8c" event={"ID":"132d342c-14e7-4cf5-a57b-ec168398bcd6","Type":"ContainerDied","Data":"a1e1933fddf132b29cc798adc427aff4954d9bb865810605d5b7733c86a3c42c"} Feb 02 14:07:16 crc kubenswrapper[4721]: I0202 14:07:16.369280 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6b8c" event={"ID":"132d342c-14e7-4cf5-a57b-ec168398bcd6","Type":"ContainerStarted","Data":"9e948eff55381bb46ecc98162f082b8ff09ac0413fd3e4474bf9ea804dd01688"} Feb 02 14:07:16 crc kubenswrapper[4721]: I0202 14:07:16.372510 4721 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 14:07:18 crc kubenswrapper[4721]: I0202 14:07:18.395205 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6b8c" event={"ID":"132d342c-14e7-4cf5-a57b-ec168398bcd6","Type":"ContainerStarted","Data":"be00503c8dac0091b8b485e8ce6e48268fac83904056717e35c733050f51c4c3"} Feb 02 14:07:19 crc kubenswrapper[4721]: I0202 14:07:19.410622 4721 scope.go:117] "RemoveContainer" containerID="5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3" Feb 02 14:07:19 crc kubenswrapper[4721]: E0202 14:07:19.411355 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:07:19 crc kubenswrapper[4721]: I0202 14:07:19.412704 4721 generic.go:334] "Generic (PLEG): container finished" podID="132d342c-14e7-4cf5-a57b-ec168398bcd6" containerID="be00503c8dac0091b8b485e8ce6e48268fac83904056717e35c733050f51c4c3" exitCode=0 Feb 02 14:07:19 crc kubenswrapper[4721]: I0202 14:07:19.412742 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6b8c" event={"ID":"132d342c-14e7-4cf5-a57b-ec168398bcd6","Type":"ContainerDied","Data":"be00503c8dac0091b8b485e8ce6e48268fac83904056717e35c733050f51c4c3"} Feb 02 14:07:20 crc kubenswrapper[4721]: I0202 14:07:20.428060 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6b8c" 
event={"ID":"132d342c-14e7-4cf5-a57b-ec168398bcd6","Type":"ContainerStarted","Data":"13c7a2c5f7de51349240cfcd6a1e2310a3a001542a18161adbb2b46b9d27bedf"} Feb 02 14:07:20 crc kubenswrapper[4721]: I0202 14:07:20.459141 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s6b8c" podStartSLOduration=2.984388626 podStartE2EDuration="6.459116205s" podCreationTimestamp="2026-02-02 14:07:14 +0000 UTC" firstStartedPulling="2026-02-02 14:07:16.372233797 +0000 UTC m=+3976.674748176" lastFinishedPulling="2026-02-02 14:07:19.846961366 +0000 UTC m=+3980.149475755" observedRunningTime="2026-02-02 14:07:20.446263197 +0000 UTC m=+3980.748777586" watchObservedRunningTime="2026-02-02 14:07:20.459116205 +0000 UTC m=+3980.761630594" Feb 02 14:07:25 crc kubenswrapper[4721]: I0202 14:07:25.375962 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s6b8c" Feb 02 14:07:25 crc kubenswrapper[4721]: I0202 14:07:25.376407 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s6b8c" Feb 02 14:07:25 crc kubenswrapper[4721]: I0202 14:07:25.425789 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s6b8c" Feb 02 14:07:25 crc kubenswrapper[4721]: I0202 14:07:25.532758 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s6b8c" Feb 02 14:07:25 crc kubenswrapper[4721]: I0202 14:07:25.686836 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s6b8c"] Feb 02 14:07:27 crc kubenswrapper[4721]: I0202 14:07:27.502463 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s6b8c" podUID="132d342c-14e7-4cf5-a57b-ec168398bcd6" containerName="registry-server" containerID="cri-o://13c7a2c5f7de51349240cfcd6a1e2310a3a001542a18161adbb2b46b9d27bedf" gracePeriod=2 Feb 02 14:07:28 crc kubenswrapper[4721]: I0202 14:07:28.120580 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s6b8c" Feb 02 14:07:28 crc kubenswrapper[4721]: I0202 14:07:28.266410 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9xpt\" (UniqueName: \"kubernetes.io/projected/132d342c-14e7-4cf5-a57b-ec168398bcd6-kube-api-access-g9xpt\") pod \"132d342c-14e7-4cf5-a57b-ec168398bcd6\" (UID: \"132d342c-14e7-4cf5-a57b-ec168398bcd6\") " Feb 02 14:07:28 crc kubenswrapper[4721]: I0202 14:07:28.266631 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/132d342c-14e7-4cf5-a57b-ec168398bcd6-catalog-content\") pod \"132d342c-14e7-4cf5-a57b-ec168398bcd6\" (UID: \"132d342c-14e7-4cf5-a57b-ec168398bcd6\") " Feb 02 14:07:28 crc kubenswrapper[4721]: I0202 14:07:28.266759 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/132d342c-14e7-4cf5-a57b-ec168398bcd6-utilities\") pod \"132d342c-14e7-4cf5-a57b-ec168398bcd6\" (UID: \"132d342c-14e7-4cf5-a57b-ec168398bcd6\") " Feb 02 14:07:28 crc kubenswrapper[4721]: I0202 14:07:28.268289 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/132d342c-14e7-4cf5-a57b-ec168398bcd6-utilities" (OuterVolumeSpecName: "utilities") pod "132d342c-14e7-4cf5-a57b-ec168398bcd6" (UID: "132d342c-14e7-4cf5-a57b-ec168398bcd6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:07:28 crc kubenswrapper[4721]: I0202 14:07:28.284316 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/132d342c-14e7-4cf5-a57b-ec168398bcd6-kube-api-access-g9xpt" (OuterVolumeSpecName: "kube-api-access-g9xpt") pod "132d342c-14e7-4cf5-a57b-ec168398bcd6" (UID: "132d342c-14e7-4cf5-a57b-ec168398bcd6"). InnerVolumeSpecName "kube-api-access-g9xpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:07:28 crc kubenswrapper[4721]: I0202 14:07:28.377046 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9xpt\" (UniqueName: \"kubernetes.io/projected/132d342c-14e7-4cf5-a57b-ec168398bcd6-kube-api-access-g9xpt\") on node \"crc\" DevicePath \"\"" Feb 02 14:07:28 crc kubenswrapper[4721]: I0202 14:07:28.377100 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/132d342c-14e7-4cf5-a57b-ec168398bcd6-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:07:28 crc kubenswrapper[4721]: I0202 14:07:28.378223 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/132d342c-14e7-4cf5-a57b-ec168398bcd6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "132d342c-14e7-4cf5-a57b-ec168398bcd6" (UID: "132d342c-14e7-4cf5-a57b-ec168398bcd6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:07:28 crc kubenswrapper[4721]: I0202 14:07:28.478782 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/132d342c-14e7-4cf5-a57b-ec168398bcd6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:07:28 crc kubenswrapper[4721]: I0202 14:07:28.513785 4721 generic.go:334] "Generic (PLEG): container finished" podID="132d342c-14e7-4cf5-a57b-ec168398bcd6" containerID="13c7a2c5f7de51349240cfcd6a1e2310a3a001542a18161adbb2b46b9d27bedf" exitCode=0 Feb 02 14:07:28 crc kubenswrapper[4721]: I0202 14:07:28.513826 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6b8c" event={"ID":"132d342c-14e7-4cf5-a57b-ec168398bcd6","Type":"ContainerDied","Data":"13c7a2c5f7de51349240cfcd6a1e2310a3a001542a18161adbb2b46b9d27bedf"} Feb 02 14:07:28 crc kubenswrapper[4721]: I0202 14:07:28.513851 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6b8c" event={"ID":"132d342c-14e7-4cf5-a57b-ec168398bcd6","Type":"ContainerDied","Data":"9e948eff55381bb46ecc98162f082b8ff09ac0413fd3e4474bf9ea804dd01688"} Feb 02 14:07:28 crc kubenswrapper[4721]: I0202 14:07:28.513866 4721 scope.go:117] "RemoveContainer" containerID="13c7a2c5f7de51349240cfcd6a1e2310a3a001542a18161adbb2b46b9d27bedf" Feb 02 14:07:28 crc kubenswrapper[4721]: I0202 14:07:28.513987 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s6b8c" Feb 02 14:07:28 crc kubenswrapper[4721]: I0202 14:07:28.536434 4721 scope.go:117] "RemoveContainer" containerID="be00503c8dac0091b8b485e8ce6e48268fac83904056717e35c733050f51c4c3" Feb 02 14:07:28 crc kubenswrapper[4721]: I0202 14:07:28.548200 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s6b8c"] Feb 02 14:07:28 crc kubenswrapper[4721]: I0202 14:07:28.557692 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s6b8c"] Feb 02 14:07:28 crc kubenswrapper[4721]: I0202 14:07:28.561438 4721 scope.go:117] "RemoveContainer" containerID="a1e1933fddf132b29cc798adc427aff4954d9bb865810605d5b7733c86a3c42c" Feb 02 14:07:28 crc kubenswrapper[4721]: I0202 14:07:28.606507 4721 scope.go:117] "RemoveContainer" containerID="13c7a2c5f7de51349240cfcd6a1e2310a3a001542a18161adbb2b46b9d27bedf" Feb 02 14:07:28 crc kubenswrapper[4721]: E0202 14:07:28.607011 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13c7a2c5f7de51349240cfcd6a1e2310a3a001542a18161adbb2b46b9d27bedf\": container with ID starting with 13c7a2c5f7de51349240cfcd6a1e2310a3a001542a18161adbb2b46b9d27bedf not found: ID does not exist" containerID="13c7a2c5f7de51349240cfcd6a1e2310a3a001542a18161adbb2b46b9d27bedf" Feb 02 14:07:28 crc kubenswrapper[4721]: I0202 14:07:28.607054 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13c7a2c5f7de51349240cfcd6a1e2310a3a001542a18161adbb2b46b9d27bedf"} err="failed to get container status \"13c7a2c5f7de51349240cfcd6a1e2310a3a001542a18161adbb2b46b9d27bedf\": rpc error: code = NotFound desc = could not find container \"13c7a2c5f7de51349240cfcd6a1e2310a3a001542a18161adbb2b46b9d27bedf\": container with ID starting with 13c7a2c5f7de51349240cfcd6a1e2310a3a001542a18161adbb2b46b9d27bedf not found: ID does not exist" Feb 02 
14:07:28 crc kubenswrapper[4721]: I0202 14:07:28.607099 4721 scope.go:117] "RemoveContainer" containerID="be00503c8dac0091b8b485e8ce6e48268fac83904056717e35c733050f51c4c3" Feb 02 14:07:28 crc kubenswrapper[4721]: E0202 14:07:28.607355 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be00503c8dac0091b8b485e8ce6e48268fac83904056717e35c733050f51c4c3\": container with ID starting with be00503c8dac0091b8b485e8ce6e48268fac83904056717e35c733050f51c4c3 not found: ID does not exist" containerID="be00503c8dac0091b8b485e8ce6e48268fac83904056717e35c733050f51c4c3" Feb 02 14:07:28 crc kubenswrapper[4721]: I0202 14:07:28.607380 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be00503c8dac0091b8b485e8ce6e48268fac83904056717e35c733050f51c4c3"} err="failed to get container status \"be00503c8dac0091b8b485e8ce6e48268fac83904056717e35c733050f51c4c3\": rpc error: code = NotFound desc = could not find container \"be00503c8dac0091b8b485e8ce6e48268fac83904056717e35c733050f51c4c3\": container with ID starting with be00503c8dac0091b8b485e8ce6e48268fac83904056717e35c733050f51c4c3 not found: ID does not exist" Feb 02 14:07:28 crc kubenswrapper[4721]: I0202 14:07:28.607392 4721 scope.go:117] "RemoveContainer" containerID="a1e1933fddf132b29cc798adc427aff4954d9bb865810605d5b7733c86a3c42c" Feb 02 14:07:28 crc kubenswrapper[4721]: E0202 14:07:28.607707 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1e1933fddf132b29cc798adc427aff4954d9bb865810605d5b7733c86a3c42c\": container with ID starting with a1e1933fddf132b29cc798adc427aff4954d9bb865810605d5b7733c86a3c42c not found: ID does not exist" containerID="a1e1933fddf132b29cc798adc427aff4954d9bb865810605d5b7733c86a3c42c" Feb 02 14:07:28 crc kubenswrapper[4721]: I0202 14:07:28.607734 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1e1933fddf132b29cc798adc427aff4954d9bb865810605d5b7733c86a3c42c"} err="failed to get container status \"a1e1933fddf132b29cc798adc427aff4954d9bb865810605d5b7733c86a3c42c\": rpc error: code = NotFound desc = could not find container \"a1e1933fddf132b29cc798adc427aff4954d9bb865810605d5b7733c86a3c42c\": container with ID starting with a1e1933fddf132b29cc798adc427aff4954d9bb865810605d5b7733c86a3c42c not found: ID does not exist" Feb 02 14:07:30 crc kubenswrapper[4721]: I0202 14:07:30.422624 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="132d342c-14e7-4cf5-a57b-ec168398bcd6" path="/var/lib/kubelet/pods/132d342c-14e7-4cf5-a57b-ec168398bcd6/volumes" Feb 02 14:07:31 crc kubenswrapper[4721]: I0202 14:07:31.410569 4721 scope.go:117] "RemoveContainer" containerID="5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3" Feb 02 14:07:31 crc kubenswrapper[4721]: E0202 14:07:31.410834 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:07:42 crc kubenswrapper[4721]: I0202 14:07:42.410239 4721 scope.go:117] "RemoveContainer" 
containerID="5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3" Feb 02 14:07:42 crc kubenswrapper[4721]: E0202 14:07:42.411278 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:07:56 crc kubenswrapper[4721]: I0202 14:07:56.409765 4721 scope.go:117] "RemoveContainer" containerID="5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3" Feb 02 14:07:56 crc kubenswrapper[4721]: E0202 14:07:56.410501 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:08:04 crc kubenswrapper[4721]: I0202 14:08:04.063884 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hqfsp"] Feb 02 14:08:04 crc kubenswrapper[4721]: E0202 14:08:04.065012 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="132d342c-14e7-4cf5-a57b-ec168398bcd6" containerName="extract-content" Feb 02 14:08:04 crc kubenswrapper[4721]: I0202 14:08:04.065040 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="132d342c-14e7-4cf5-a57b-ec168398bcd6" containerName="extract-content" Feb 02 14:08:04 crc kubenswrapper[4721]: E0202 14:08:04.065059 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="132d342c-14e7-4cf5-a57b-ec168398bcd6" containerName="extract-utilities" Feb 02 14:08:04 crc kubenswrapper[4721]: I0202 14:08:04.065076 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="132d342c-14e7-4cf5-a57b-ec168398bcd6" containerName="extract-utilities" Feb 02 14:08:04 crc kubenswrapper[4721]: E0202 14:08:04.065115 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="132d342c-14e7-4cf5-a57b-ec168398bcd6" containerName="registry-server" Feb 02 14:08:04 crc kubenswrapper[4721]: I0202 14:08:04.065122 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="132d342c-14e7-4cf5-a57b-ec168398bcd6" containerName="registry-server" Feb 02 14:08:04 crc kubenswrapper[4721]: I0202 14:08:04.065359 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="132d342c-14e7-4cf5-a57b-ec168398bcd6" containerName="registry-server" Feb 02 14:08:04 crc kubenswrapper[4721]: I0202 14:08:04.067415 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hqfsp" Feb 02 14:08:04 crc kubenswrapper[4721]: I0202 14:08:04.090680 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hqfsp"] Feb 02 14:08:04 crc kubenswrapper[4721]: I0202 14:08:04.139718 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frcgn\" (UniqueName: \"kubernetes.io/projected/41332d77-5523-4863-90fd-84ef4bd024dc-kube-api-access-frcgn\") pod \"redhat-operators-hqfsp\" (UID: \"41332d77-5523-4863-90fd-84ef4bd024dc\") " pod="openshift-marketplace/redhat-operators-hqfsp" Feb 02 14:08:04 crc kubenswrapper[4721]: I0202 14:08:04.139937 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41332d77-5523-4863-90fd-84ef4bd024dc-utilities\") pod \"redhat-operators-hqfsp\" (UID: \"41332d77-5523-4863-90fd-84ef4bd024dc\") " pod="openshift-marketplace/redhat-operators-hqfsp" Feb 02 14:08:04 crc kubenswrapper[4721]: I0202 14:08:04.140047 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41332d77-5523-4863-90fd-84ef4bd024dc-catalog-content\") pod \"redhat-operators-hqfsp\" (UID: \"41332d77-5523-4863-90fd-84ef4bd024dc\") " pod="openshift-marketplace/redhat-operators-hqfsp" Feb 02 14:08:04 crc kubenswrapper[4721]: I0202 14:08:04.242831 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41332d77-5523-4863-90fd-84ef4bd024dc-utilities\") pod \"redhat-operators-hqfsp\" (UID: \"41332d77-5523-4863-90fd-84ef4bd024dc\") " pod="openshift-marketplace/redhat-operators-hqfsp" Feb 02 14:08:04 crc kubenswrapper[4721]: I0202 14:08:04.242906 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41332d77-5523-4863-90fd-84ef4bd024dc-catalog-content\") pod \"redhat-operators-hqfsp\" (UID: \"41332d77-5523-4863-90fd-84ef4bd024dc\") " pod="openshift-marketplace/redhat-operators-hqfsp" Feb 02 14:08:04 crc kubenswrapper[4721]: I0202 14:08:04.243053 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frcgn\" (UniqueName: \"kubernetes.io/projected/41332d77-5523-4863-90fd-84ef4bd024dc-kube-api-access-frcgn\") pod \"redhat-operators-hqfsp\" (UID: \"41332d77-5523-4863-90fd-84ef4bd024dc\") " pod="openshift-marketplace/redhat-operators-hqfsp" Feb 02 14:08:04 crc kubenswrapper[4721]: I0202 14:08:04.243338 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41332d77-5523-4863-90fd-84ef4bd024dc-utilities\") pod \"redhat-operators-hqfsp\" (UID: \"41332d77-5523-4863-90fd-84ef4bd024dc\") " pod="openshift-marketplace/redhat-operators-hqfsp" Feb 02 14:08:04 crc kubenswrapper[4721]: I0202 14:08:04.243393 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41332d77-5523-4863-90fd-84ef4bd024dc-catalog-content\") pod \"redhat-operators-hqfsp\" (UID: \"41332d77-5523-4863-90fd-84ef4bd024dc\") " pod="openshift-marketplace/redhat-operators-hqfsp" Feb 02 14:08:04 crc kubenswrapper[4721]: I0202 14:08:04.263806 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-frcgn\" (UniqueName: \"kubernetes.io/projected/41332d77-5523-4863-90fd-84ef4bd024dc-kube-api-access-frcgn\") pod \"redhat-operators-hqfsp\" (UID: \"41332d77-5523-4863-90fd-84ef4bd024dc\") " pod="openshift-marketplace/redhat-operators-hqfsp" Feb 02 14:08:04 crc kubenswrapper[4721]: I0202 14:08:04.399353 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hqfsp" Feb 02 14:08:04 crc kubenswrapper[4721]: I0202 14:08:04.983700 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hqfsp"] Feb 02 14:08:05 crc kubenswrapper[4721]: I0202 14:08:05.905493 4721 generic.go:334] "Generic (PLEG): container finished" podID="41332d77-5523-4863-90fd-84ef4bd024dc" containerID="d74cc09c503748e4dfd431417e851a6d44e9f0ec667431f7f10124af0c9ae579" exitCode=0 Feb 02 14:08:05 crc kubenswrapper[4721]: I0202 14:08:05.905548 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqfsp" event={"ID":"41332d77-5523-4863-90fd-84ef4bd024dc","Type":"ContainerDied","Data":"d74cc09c503748e4dfd431417e851a6d44e9f0ec667431f7f10124af0c9ae579"} Feb 02 14:08:05 crc kubenswrapper[4721]: I0202 14:08:05.906104 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqfsp" event={"ID":"41332d77-5523-4863-90fd-84ef4bd024dc","Type":"ContainerStarted","Data":"6c07071d78da47bcb155fe49420a5a01c4cbee1e05c6f701f0e8d4bb5368342f"} Feb 02 14:08:07 crc kubenswrapper[4721]: I0202 14:08:07.928447 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqfsp" event={"ID":"41332d77-5523-4863-90fd-84ef4bd024dc","Type":"ContainerStarted","Data":"90ef5823e28ad3753a59c90c22174f8e739b207f7258218e3047dbd87eadc733"} Feb 02 14:08:08 crc kubenswrapper[4721]: I0202 14:08:08.051282 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jnl2q"] Feb 02 14:08:08 crc kubenswrapper[4721]: I0202 14:08:08.054770 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jnl2q" Feb 02 14:08:08 crc kubenswrapper[4721]: I0202 14:08:08.075409 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jnl2q"] Feb 02 14:08:08 crc kubenswrapper[4721]: I0202 14:08:08.154611 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e27767b-809c-4392-aec0-a3e3d50959fb-utilities\") pod \"certified-operators-jnl2q\" (UID: \"7e27767b-809c-4392-aec0-a3e3d50959fb\") " pod="openshift-marketplace/certified-operators-jnl2q" Feb 02 14:08:08 crc kubenswrapper[4721]: I0202 14:08:08.155054 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cd9w\" (UniqueName: \"kubernetes.io/projected/7e27767b-809c-4392-aec0-a3e3d50959fb-kube-api-access-5cd9w\") pod \"certified-operators-jnl2q\" (UID: \"7e27767b-809c-4392-aec0-a3e3d50959fb\") " pod="openshift-marketplace/certified-operators-jnl2q" Feb 02 14:08:08 crc kubenswrapper[4721]: I0202 14:08:08.155283 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e27767b-809c-4392-aec0-a3e3d50959fb-catalog-content\") pod \"certified-operators-jnl2q\" (UID: \"7e27767b-809c-4392-aec0-a3e3d50959fb\") " pod="openshift-marketplace/certified-operators-jnl2q" Feb 02 14:08:08 crc kubenswrapper[4721]: I0202 14:08:08.257871 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cd9w\" (UniqueName: \"kubernetes.io/projected/7e27767b-809c-4392-aec0-a3e3d50959fb-kube-api-access-5cd9w\") pod \"certified-operators-jnl2q\" (UID: \"7e27767b-809c-4392-aec0-a3e3d50959fb\") " pod="openshift-marketplace/certified-operators-jnl2q" Feb 02 14:08:08 crc kubenswrapper[4721]: I0202 14:08:08.258334 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e27767b-809c-4392-aec0-a3e3d50959fb-catalog-content\") pod \"certified-operators-jnl2q\" (UID: \"7e27767b-809c-4392-aec0-a3e3d50959fb\") " pod="openshift-marketplace/certified-operators-jnl2q" Feb 02 14:08:08 crc kubenswrapper[4721]: I0202 14:08:08.258564 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e27767b-809c-4392-aec0-a3e3d50959fb-utilities\") pod \"certified-operators-jnl2q\" (UID: \"7e27767b-809c-4392-aec0-a3e3d50959fb\") " pod="openshift-marketplace/certified-operators-jnl2q" Feb 02 14:08:08 crc kubenswrapper[4721]: I0202 14:08:08.258882 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e27767b-809c-4392-aec0-a3e3d50959fb-catalog-content\") pod \"certified-operators-jnl2q\" (UID: \"7e27767b-809c-4392-aec0-a3e3d50959fb\") " pod="openshift-marketplace/certified-operators-jnl2q" Feb 02 14:08:08 crc kubenswrapper[4721]: I0202 14:08:08.259063 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e27767b-809c-4392-aec0-a3e3d50959fb-utilities\") pod \"certified-operators-jnl2q\" (UID: \"7e27767b-809c-4392-aec0-a3e3d50959fb\") " pod="openshift-marketplace/certified-operators-jnl2q" Feb 02 14:08:08 crc kubenswrapper[4721]: I0202 14:08:08.277323 4721 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5cd9w\" (UniqueName: \"kubernetes.io/projected/7e27767b-809c-4392-aec0-a3e3d50959fb-kube-api-access-5cd9w\") pod \"certified-operators-jnl2q\" (UID: \"7e27767b-809c-4392-aec0-a3e3d50959fb\") " pod="openshift-marketplace/certified-operators-jnl2q" Feb 02 14:08:08 crc kubenswrapper[4721]: I0202 14:08:08.384541 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jnl2q" Feb 02 14:08:09 crc kubenswrapper[4721]: I0202 14:08:09.063872 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jnl2q"] Feb 02 14:08:09 crc kubenswrapper[4721]: W0202 14:08:09.475917 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e27767b_809c_4392_aec0_a3e3d50959fb.slice/crio-a80f37e1964f3e1ad813d5e6eeead9f759f231490559155985da437ef23d10c2 WatchSource:0}: Error finding container a80f37e1964f3e1ad813d5e6eeead9f759f231490559155985da437ef23d10c2: Status 404 returned error can't find the container with id a80f37e1964f3e1ad813d5e6eeead9f759f231490559155985da437ef23d10c2 Feb 02 14:08:09 crc kubenswrapper[4721]: I0202 14:08:09.956788 4721 generic.go:334] "Generic (PLEG): container finished" podID="7e27767b-809c-4392-aec0-a3e3d50959fb" containerID="a4bba1e22fb3a442494acc735e394cf429cffe943b51c57d84263aa5f3758e98" exitCode=0 Feb 02 14:08:09 crc kubenswrapper[4721]: I0202 14:08:09.956860 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnl2q" event={"ID":"7e27767b-809c-4392-aec0-a3e3d50959fb","Type":"ContainerDied","Data":"a4bba1e22fb3a442494acc735e394cf429cffe943b51c57d84263aa5f3758e98"} Feb 02 14:08:09 crc kubenswrapper[4721]: I0202 14:08:09.957099 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnl2q" event={"ID":"7e27767b-809c-4392-aec0-a3e3d50959fb","Type":"ContainerStarted","Data":"a80f37e1964f3e1ad813d5e6eeead9f759f231490559155985da437ef23d10c2"} Feb 02 14:08:10 crc kubenswrapper[4721]: I0202 14:08:10.970035 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnl2q" event={"ID":"7e27767b-809c-4392-aec0-a3e3d50959fb","Type":"ContainerStarted","Data":"b5da3211ab2c81abf51700d6d4107828beceaba7bc40e9a77c6cea71c7975317"} Feb 02 14:08:11 crc kubenswrapper[4721]: I0202 14:08:11.409811 4721 scope.go:117] "RemoveContainer" containerID="5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3" Feb 02 14:08:11 crc kubenswrapper[4721]: E0202 14:08:11.410660 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:08:16 crc kubenswrapper[4721]: I0202 14:08:16.027781 4721 generic.go:334] "Generic (PLEG): container finished" podID="41332d77-5523-4863-90fd-84ef4bd024dc" containerID="90ef5823e28ad3753a59c90c22174f8e739b207f7258218e3047dbd87eadc733" exitCode=0 Feb 02 14:08:16 crc kubenswrapper[4721]: I0202 14:08:16.027877 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqfsp" 
event={"ID":"41332d77-5523-4863-90fd-84ef4bd024dc","Type":"ContainerDied","Data":"90ef5823e28ad3753a59c90c22174f8e739b207f7258218e3047dbd87eadc733"} Feb 02 14:08:16 crc kubenswrapper[4721]: I0202 14:08:16.031025 4721 generic.go:334] "Generic (PLEG): container finished" podID="7e27767b-809c-4392-aec0-a3e3d50959fb" containerID="b5da3211ab2c81abf51700d6d4107828beceaba7bc40e9a77c6cea71c7975317" exitCode=0 Feb 02 14:08:16 crc kubenswrapper[4721]: I0202 14:08:16.031060 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnl2q" event={"ID":"7e27767b-809c-4392-aec0-a3e3d50959fb","Type":"ContainerDied","Data":"b5da3211ab2c81abf51700d6d4107828beceaba7bc40e9a77c6cea71c7975317"} Feb 02 14:08:18 crc kubenswrapper[4721]: I0202 14:08:18.056958 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnl2q" event={"ID":"7e27767b-809c-4392-aec0-a3e3d50959fb","Type":"ContainerStarted","Data":"e4a5b4f720a64cea1dfa855a40701a8879af0576f1d55c540e45a5059f322778"} Feb 02 14:08:18 crc kubenswrapper[4721]: I0202 14:08:18.062530 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqfsp" event={"ID":"41332d77-5523-4863-90fd-84ef4bd024dc","Type":"ContainerStarted","Data":"f942f9a01862da049be32562bbba8ea52e535b48156529cd0268d2469ccc00b8"} Feb 02 14:08:18 crc kubenswrapper[4721]: I0202 14:08:18.084088 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jnl2q" podStartSLOduration=2.613649991 podStartE2EDuration="10.084048654s" podCreationTimestamp="2026-02-02 14:08:08 +0000 UTC" firstStartedPulling="2026-02-02 14:08:09.958613835 +0000 UTC m=+4030.261128224" lastFinishedPulling="2026-02-02 14:08:17.429012498 +0000 UTC m=+4037.731526887" observedRunningTime="2026-02-02 14:08:18.073635793 +0000 UTC m=+4038.376150202" watchObservedRunningTime="2026-02-02 14:08:18.084048654 +0000 UTC m=+4038.386563043" Feb 02 14:08:18 crc kubenswrapper[4721]: I0202 14:08:18.102018 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hqfsp" podStartSLOduration=2.489914928 podStartE2EDuration="14.101972148s" podCreationTimestamp="2026-02-02 14:08:04 +0000 UTC" firstStartedPulling="2026-02-02 14:08:05.907942795 +0000 UTC m=+4026.210457204" lastFinishedPulling="2026-02-02 14:08:17.520000025 +0000 UTC m=+4037.822514424" observedRunningTime="2026-02-02 14:08:18.09094558 +0000 UTC m=+4038.393459989" watchObservedRunningTime="2026-02-02 14:08:18.101972148 +0000 UTC m=+4038.404486537" Feb 02 14:08:18 crc kubenswrapper[4721]: I0202 14:08:18.385421 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jnl2q" Feb 02 14:08:18 crc kubenswrapper[4721]: I0202 14:08:18.385877 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jnl2q" Feb 02 14:08:19 crc kubenswrapper[4721]: I0202 14:08:19.449315 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-jnl2q" podUID="7e27767b-809c-4392-aec0-a3e3d50959fb" containerName="registry-server" probeResult="failure" output=< Feb 02 14:08:19 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Feb 02 14:08:19 crc kubenswrapper[4721]: > Feb 02 14:08:23 crc kubenswrapper[4721]: I0202 14:08:23.410331 4721 scope.go:117] "RemoveContainer" 
containerID="5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3" Feb 02 14:08:23 crc kubenswrapper[4721]: E0202 14:08:23.411483 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:08:24 crc kubenswrapper[4721]: I0202 14:08:24.400376 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hqfsp" Feb 02 14:08:24 crc kubenswrapper[4721]: I0202 14:08:24.400429 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hqfsp" Feb 02 14:08:25 crc kubenswrapper[4721]: I0202 14:08:25.447591 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hqfsp" podUID="41332d77-5523-4863-90fd-84ef4bd024dc" containerName="registry-server" probeResult="failure" output=< Feb 02 14:08:25 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Feb 02 14:08:25 crc kubenswrapper[4721]: > Feb 02 14:08:28 crc kubenswrapper[4721]: I0202 14:08:28.451162 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jnl2q" Feb 02 14:08:28 crc kubenswrapper[4721]: I0202 14:08:28.514016 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jnl2q" Feb 02 14:08:28 crc kubenswrapper[4721]: I0202 14:08:28.713096 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jnl2q"] Feb 02 14:08:30 crc kubenswrapper[4721]: I0202 14:08:30.177093 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jnl2q" podUID="7e27767b-809c-4392-aec0-a3e3d50959fb" containerName="registry-server" containerID="cri-o://e4a5b4f720a64cea1dfa855a40701a8879af0576f1d55c540e45a5059f322778" gracePeriod=2 Feb 02 14:08:31 crc kubenswrapper[4721]: I0202 14:08:31.191000 4721 generic.go:334] "Generic (PLEG): container finished" podID="7e27767b-809c-4392-aec0-a3e3d50959fb" containerID="e4a5b4f720a64cea1dfa855a40701a8879af0576f1d55c540e45a5059f322778" exitCode=0 Feb 02 14:08:31 crc kubenswrapper[4721]: I0202 14:08:31.191044 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnl2q" event={"ID":"7e27767b-809c-4392-aec0-a3e3d50959fb","Type":"ContainerDied","Data":"e4a5b4f720a64cea1dfa855a40701a8879af0576f1d55c540e45a5059f322778"} Feb 02 14:08:31 crc kubenswrapper[4721]: I0202 14:08:31.605213 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jnl2q" Feb 02 14:08:31 crc kubenswrapper[4721]: I0202 14:08:31.654899 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e27767b-809c-4392-aec0-a3e3d50959fb-utilities\") pod \"7e27767b-809c-4392-aec0-a3e3d50959fb\" (UID: \"7e27767b-809c-4392-aec0-a3e3d50959fb\") " Feb 02 14:08:31 crc kubenswrapper[4721]: I0202 14:08:31.655670 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cd9w\" (UniqueName: \"kubernetes.io/projected/7e27767b-809c-4392-aec0-a3e3d50959fb-kube-api-access-5cd9w\") pod \"7e27767b-809c-4392-aec0-a3e3d50959fb\" (UID: \"7e27767b-809c-4392-aec0-a3e3d50959fb\") " Feb 02 14:08:31 crc kubenswrapper[4721]: I0202 14:08:31.655778 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e27767b-809c-4392-aec0-a3e3d50959fb-catalog-content\") pod \"7e27767b-809c-4392-aec0-a3e3d50959fb\" (UID: \"7e27767b-809c-4392-aec0-a3e3d50959fb\") " Feb 02 14:08:31 crc kubenswrapper[4721]: I0202 14:08:31.655787 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e27767b-809c-4392-aec0-a3e3d50959fb-utilities" (OuterVolumeSpecName: "utilities") pod "7e27767b-809c-4392-aec0-a3e3d50959fb" (UID: "7e27767b-809c-4392-aec0-a3e3d50959fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:08:31 crc kubenswrapper[4721]: I0202 14:08:31.656482 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e27767b-809c-4392-aec0-a3e3d50959fb-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:08:31 crc kubenswrapper[4721]: I0202 14:08:31.661140 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e27767b-809c-4392-aec0-a3e3d50959fb-kube-api-access-5cd9w" (OuterVolumeSpecName: "kube-api-access-5cd9w") pod "7e27767b-809c-4392-aec0-a3e3d50959fb" (UID: "7e27767b-809c-4392-aec0-a3e3d50959fb"). InnerVolumeSpecName "kube-api-access-5cd9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:08:31 crc kubenswrapper[4721]: I0202 14:08:31.711123 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e27767b-809c-4392-aec0-a3e3d50959fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e27767b-809c-4392-aec0-a3e3d50959fb" (UID: "7e27767b-809c-4392-aec0-a3e3d50959fb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:08:31 crc kubenswrapper[4721]: I0202 14:08:31.760249 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cd9w\" (UniqueName: \"kubernetes.io/projected/7e27767b-809c-4392-aec0-a3e3d50959fb-kube-api-access-5cd9w\") on node \"crc\" DevicePath \"\"" Feb 02 14:08:31 crc kubenswrapper[4721]: I0202 14:08:31.760302 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e27767b-809c-4392-aec0-a3e3d50959fb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:08:32 crc kubenswrapper[4721]: I0202 14:08:32.204807 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnl2q" event={"ID":"7e27767b-809c-4392-aec0-a3e3d50959fb","Type":"ContainerDied","Data":"a80f37e1964f3e1ad813d5e6eeead9f759f231490559155985da437ef23d10c2"} Feb 02 14:08:32 crc kubenswrapper[4721]: I0202 14:08:32.204866 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jnl2q" Feb 02 14:08:32 crc kubenswrapper[4721]: I0202 14:08:32.204881 4721 scope.go:117] "RemoveContainer" containerID="e4a5b4f720a64cea1dfa855a40701a8879af0576f1d55c540e45a5059f322778" Feb 02 14:08:32 crc kubenswrapper[4721]: I0202 14:08:32.251529 4721 scope.go:117] "RemoveContainer" containerID="b5da3211ab2c81abf51700d6d4107828beceaba7bc40e9a77c6cea71c7975317" Feb 02 14:08:32 crc kubenswrapper[4721]: I0202 14:08:32.253024 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jnl2q"] Feb 02 14:08:32 crc kubenswrapper[4721]: I0202 14:08:32.267528 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jnl2q"] Feb 02 14:08:32 crc kubenswrapper[4721]: I0202 14:08:32.322820 4721 scope.go:117] "RemoveContainer" containerID="a4bba1e22fb3a442494acc735e394cf429cffe943b51c57d84263aa5f3758e98" Feb 02 14:08:32 crc kubenswrapper[4721]: I0202 14:08:32.422265 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e27767b-809c-4392-aec0-a3e3d50959fb" path="/var/lib/kubelet/pods/7e27767b-809c-4392-aec0-a3e3d50959fb/volumes" Feb 02 14:08:35 crc kubenswrapper[4721]: I0202 14:08:35.456115 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hqfsp" podUID="41332d77-5523-4863-90fd-84ef4bd024dc" containerName="registry-server" probeResult="failure" output=< Feb 02 14:08:35 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Feb 02 14:08:35 crc kubenswrapper[4721]: > Feb 02 14:08:36 crc kubenswrapper[4721]: I0202 14:08:36.409989 4721 scope.go:117] "RemoveContainer" containerID="5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3" Feb 02 14:08:36 crc kubenswrapper[4721]: E0202 14:08:36.410595 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:08:45 crc kubenswrapper[4721]: I0202 14:08:45.901638 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hqfsp" 
podUID="41332d77-5523-4863-90fd-84ef4bd024dc" containerName="registry-server" probeResult="failure" output=< Feb 02 14:08:45 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Feb 02 14:08:45 crc kubenswrapper[4721]: > Feb 02 14:08:47 crc kubenswrapper[4721]: I0202 14:08:47.410337 4721 scope.go:117] "RemoveContainer" containerID="5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3" Feb 02 14:08:47 crc kubenswrapper[4721]: E0202 14:08:47.410967 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:08:54 crc kubenswrapper[4721]: I0202 14:08:54.451705 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hqfsp" Feb 02 14:08:54 crc kubenswrapper[4721]: I0202 14:08:54.519166 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hqfsp" Feb 02 14:08:54 crc kubenswrapper[4721]: I0202 14:08:54.693434 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hqfsp"] Feb 02 14:08:56 crc kubenswrapper[4721]: I0202 14:08:56.442162 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hqfsp" podUID="41332d77-5523-4863-90fd-84ef4bd024dc" containerName="registry-server" containerID="cri-o://f942f9a01862da049be32562bbba8ea52e535b48156529cd0268d2469ccc00b8" gracePeriod=2 Feb 02 14:08:56 crc kubenswrapper[4721]: I0202 14:08:56.947862 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hqfsp" Feb 02 14:08:57 crc kubenswrapper[4721]: I0202 14:08:57.003027 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frcgn\" (UniqueName: \"kubernetes.io/projected/41332d77-5523-4863-90fd-84ef4bd024dc-kube-api-access-frcgn\") pod \"41332d77-5523-4863-90fd-84ef4bd024dc\" (UID: \"41332d77-5523-4863-90fd-84ef4bd024dc\") " Feb 02 14:08:57 crc kubenswrapper[4721]: I0202 14:08:57.003404 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41332d77-5523-4863-90fd-84ef4bd024dc-catalog-content\") pod \"41332d77-5523-4863-90fd-84ef4bd024dc\" (UID: \"41332d77-5523-4863-90fd-84ef4bd024dc\") " Feb 02 14:08:57 crc kubenswrapper[4721]: I0202 14:08:57.003670 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41332d77-5523-4863-90fd-84ef4bd024dc-utilities\") pod \"41332d77-5523-4863-90fd-84ef4bd024dc\" (UID: \"41332d77-5523-4863-90fd-84ef4bd024dc\") " Feb 02 14:08:57 crc kubenswrapper[4721]: I0202 14:08:57.004589 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41332d77-5523-4863-90fd-84ef4bd024dc-utilities" (OuterVolumeSpecName: "utilities") pod "41332d77-5523-4863-90fd-84ef4bd024dc" (UID: "41332d77-5523-4863-90fd-84ef4bd024dc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:08:57 crc kubenswrapper[4721]: I0202 14:08:57.004909 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41332d77-5523-4863-90fd-84ef4bd024dc-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:08:57 crc kubenswrapper[4721]: I0202 14:08:57.012271 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41332d77-5523-4863-90fd-84ef4bd024dc-kube-api-access-frcgn" (OuterVolumeSpecName: "kube-api-access-frcgn") pod "41332d77-5523-4863-90fd-84ef4bd024dc" (UID: "41332d77-5523-4863-90fd-84ef4bd024dc"). InnerVolumeSpecName "kube-api-access-frcgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:08:57 crc kubenswrapper[4721]: I0202 14:08:57.107187 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frcgn\" (UniqueName: \"kubernetes.io/projected/41332d77-5523-4863-90fd-84ef4bd024dc-kube-api-access-frcgn\") on node \"crc\" DevicePath \"\"" Feb 02 14:08:57 crc kubenswrapper[4721]: I0202 14:08:57.133481 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41332d77-5523-4863-90fd-84ef4bd024dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41332d77-5523-4863-90fd-84ef4bd024dc" (UID: "41332d77-5523-4863-90fd-84ef4bd024dc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:08:57 crc kubenswrapper[4721]: I0202 14:08:57.209861 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41332d77-5523-4863-90fd-84ef4bd024dc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:08:57 crc kubenswrapper[4721]: I0202 14:08:57.454286 4721 generic.go:334] "Generic (PLEG): container finished" podID="41332d77-5523-4863-90fd-84ef4bd024dc" containerID="f942f9a01862da049be32562bbba8ea52e535b48156529cd0268d2469ccc00b8" exitCode=0 Feb 02 14:08:57 crc kubenswrapper[4721]: I0202 14:08:57.454331 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqfsp" event={"ID":"41332d77-5523-4863-90fd-84ef4bd024dc","Type":"ContainerDied","Data":"f942f9a01862da049be32562bbba8ea52e535b48156529cd0268d2469ccc00b8"} Feb 02 14:08:57 crc kubenswrapper[4721]: I0202 14:08:57.454369 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqfsp" event={"ID":"41332d77-5523-4863-90fd-84ef4bd024dc","Type":"ContainerDied","Data":"6c07071d78da47bcb155fe49420a5a01c4cbee1e05c6f701f0e8d4bb5368342f"} Feb 02 14:08:57 crc kubenswrapper[4721]: I0202 14:08:57.454374 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hqfsp" Feb 02 14:08:57 crc kubenswrapper[4721]: I0202 14:08:57.454392 4721 scope.go:117] "RemoveContainer" containerID="f942f9a01862da049be32562bbba8ea52e535b48156529cd0268d2469ccc00b8" Feb 02 14:08:57 crc kubenswrapper[4721]: I0202 14:08:57.481181 4721 scope.go:117] "RemoveContainer" containerID="90ef5823e28ad3753a59c90c22174f8e739b207f7258218e3047dbd87eadc733" Feb 02 14:08:57 crc kubenswrapper[4721]: I0202 14:08:57.502617 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hqfsp"] Feb 02 14:08:57 crc kubenswrapper[4721]: I0202 14:08:57.530576 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hqfsp"] Feb 02 14:08:57 crc kubenswrapper[4721]: I0202 14:08:57.542338 4721 scope.go:117] "RemoveContainer" containerID="d74cc09c503748e4dfd431417e851a6d44e9f0ec667431f7f10124af0c9ae579" Feb 02 14:08:57 crc kubenswrapper[4721]: I0202 14:08:57.582811 4721 scope.go:117] "RemoveContainer" containerID="f942f9a01862da049be32562bbba8ea52e535b48156529cd0268d2469ccc00b8" Feb 02 14:08:57 crc kubenswrapper[4721]: E0202 14:08:57.583288 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f942f9a01862da049be32562bbba8ea52e535b48156529cd0268d2469ccc00b8\": container with ID starting with f942f9a01862da049be32562bbba8ea52e535b48156529cd0268d2469ccc00b8 not found: ID does not exist" containerID="f942f9a01862da049be32562bbba8ea52e535b48156529cd0268d2469ccc00b8" Feb 02 14:08:57 crc kubenswrapper[4721]: I0202 14:08:57.583325 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f942f9a01862da049be32562bbba8ea52e535b48156529cd0268d2469ccc00b8"} err="failed to get container status \"f942f9a01862da049be32562bbba8ea52e535b48156529cd0268d2469ccc00b8\": rpc error: code = NotFound desc = could not find container \"f942f9a01862da049be32562bbba8ea52e535b48156529cd0268d2469ccc00b8\": container with ID starting with f942f9a01862da049be32562bbba8ea52e535b48156529cd0268d2469ccc00b8 not found: ID does not exist" Feb 02 14:08:57 crc kubenswrapper[4721]: I0202 14:08:57.583348 4721 scope.go:117] "RemoveContainer" containerID="90ef5823e28ad3753a59c90c22174f8e739b207f7258218e3047dbd87eadc733" Feb 02 14:08:57 crc kubenswrapper[4721]: E0202 14:08:57.583806 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90ef5823e28ad3753a59c90c22174f8e739b207f7258218e3047dbd87eadc733\": container with ID starting with 90ef5823e28ad3753a59c90c22174f8e739b207f7258218e3047dbd87eadc733 not found: ID does not exist" containerID="90ef5823e28ad3753a59c90c22174f8e739b207f7258218e3047dbd87eadc733" Feb 02 14:08:57 crc kubenswrapper[4721]: I0202 14:08:57.583854 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90ef5823e28ad3753a59c90c22174f8e739b207f7258218e3047dbd87eadc733"} err="failed to get container status \"90ef5823e28ad3753a59c90c22174f8e739b207f7258218e3047dbd87eadc733\": rpc error: code = NotFound desc = could not find container \"90ef5823e28ad3753a59c90c22174f8e739b207f7258218e3047dbd87eadc733\": container with ID starting with 90ef5823e28ad3753a59c90c22174f8e739b207f7258218e3047dbd87eadc733 not found: ID does not exist" Feb 02 14:08:57 crc kubenswrapper[4721]: I0202 14:08:57.583887 4721 scope.go:117] "RemoveContainer" 
containerID="d74cc09c503748e4dfd431417e851a6d44e9f0ec667431f7f10124af0c9ae579" Feb 02 14:08:57 crc kubenswrapper[4721]: E0202 14:08:57.584312 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d74cc09c503748e4dfd431417e851a6d44e9f0ec667431f7f10124af0c9ae579\": container with ID starting with d74cc09c503748e4dfd431417e851a6d44e9f0ec667431f7f10124af0c9ae579 not found: ID does not exist" containerID="d74cc09c503748e4dfd431417e851a6d44e9f0ec667431f7f10124af0c9ae579" Feb 02 14:08:57 crc kubenswrapper[4721]: I0202 14:08:57.584347 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d74cc09c503748e4dfd431417e851a6d44e9f0ec667431f7f10124af0c9ae579"} err="failed to get container status \"d74cc09c503748e4dfd431417e851a6d44e9f0ec667431f7f10124af0c9ae579\": rpc error: code = NotFound desc = could not find container \"d74cc09c503748e4dfd431417e851a6d44e9f0ec667431f7f10124af0c9ae579\": container with ID starting with d74cc09c503748e4dfd431417e851a6d44e9f0ec667431f7f10124af0c9ae579 not found: ID does not exist" Feb 02 14:08:58 crc kubenswrapper[4721]: I0202 14:08:58.430866 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41332d77-5523-4863-90fd-84ef4bd024dc" path="/var/lib/kubelet/pods/41332d77-5523-4863-90fd-84ef4bd024dc/volumes" Feb 02 14:09:00 crc kubenswrapper[4721]: I0202 14:09:00.416472 4721 scope.go:117] "RemoveContainer" containerID="5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3" Feb 02 14:09:00 crc kubenswrapper[4721]: E0202 14:09:00.417100 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:09:14 crc kubenswrapper[4721]: I0202 14:09:14.411157 4721 scope.go:117] "RemoveContainer" containerID="5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3" Feb 02 14:09:14 crc kubenswrapper[4721]: E0202 14:09:14.413538 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:09:18 crc kubenswrapper[4721]: I0202 14:09:18.116714 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6jl2c"] Feb 02 14:09:18 crc kubenswrapper[4721]: E0202 14:09:18.118190 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e27767b-809c-4392-aec0-a3e3d50959fb" containerName="extract-content" Feb 02 14:09:18 crc kubenswrapper[4721]: I0202 14:09:18.118216 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e27767b-809c-4392-aec0-a3e3d50959fb" containerName="extract-content" Feb 02 14:09:18 crc kubenswrapper[4721]: E0202 14:09:18.118261 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41332d77-5523-4863-90fd-84ef4bd024dc" containerName="extract-utilities" Feb 02 14:09:18 crc kubenswrapper[4721]: I0202 
14:09:18.118276 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="41332d77-5523-4863-90fd-84ef4bd024dc" containerName="extract-utilities" Feb 02 14:09:18 crc kubenswrapper[4721]: E0202 14:09:18.118298 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e27767b-809c-4392-aec0-a3e3d50959fb" containerName="registry-server" Feb 02 14:09:18 crc kubenswrapper[4721]: I0202 14:09:18.118311 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e27767b-809c-4392-aec0-a3e3d50959fb" containerName="registry-server" Feb 02 14:09:18 crc kubenswrapper[4721]: E0202 14:09:18.118334 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41332d77-5523-4863-90fd-84ef4bd024dc" containerName="extract-content" Feb 02 14:09:18 crc kubenswrapper[4721]: I0202 14:09:18.118345 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="41332d77-5523-4863-90fd-84ef4bd024dc" containerName="extract-content" Feb 02 14:09:18 crc kubenswrapper[4721]: E0202 14:09:18.118369 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e27767b-809c-4392-aec0-a3e3d50959fb" containerName="extract-utilities" Feb 02 14:09:18 crc kubenswrapper[4721]: I0202 14:09:18.118380 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e27767b-809c-4392-aec0-a3e3d50959fb" containerName="extract-utilities" Feb 02 14:09:18 crc kubenswrapper[4721]: E0202 14:09:18.118406 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41332d77-5523-4863-90fd-84ef4bd024dc" containerName="registry-server" Feb 02 14:09:18 crc kubenswrapper[4721]: I0202 14:09:18.118419 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="41332d77-5523-4863-90fd-84ef4bd024dc" containerName="registry-server" Feb 02 14:09:18 crc kubenswrapper[4721]: I0202 14:09:18.118801 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="41332d77-5523-4863-90fd-84ef4bd024dc" containerName="registry-server" Feb 02 14:09:18 crc kubenswrapper[4721]: I0202 14:09:18.118830 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e27767b-809c-4392-aec0-a3e3d50959fb" containerName="registry-server" Feb 02 14:09:18 crc kubenswrapper[4721]: I0202 14:09:18.121644 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6jl2c" Feb 02 14:09:18 crc kubenswrapper[4721]: I0202 14:09:18.125350 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/124e4063-2c55-4307-9d6f-8e9f776a994f-catalog-content\") pod \"redhat-marketplace-6jl2c\" (UID: \"124e4063-2c55-4307-9d6f-8e9f776a994f\") " pod="openshift-marketplace/redhat-marketplace-6jl2c" Feb 02 14:09:18 crc kubenswrapper[4721]: I0202 14:09:18.125603 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xkxx\" (UniqueName: \"kubernetes.io/projected/124e4063-2c55-4307-9d6f-8e9f776a994f-kube-api-access-2xkxx\") pod \"redhat-marketplace-6jl2c\" (UID: \"124e4063-2c55-4307-9d6f-8e9f776a994f\") " pod="openshift-marketplace/redhat-marketplace-6jl2c" Feb 02 14:09:18 crc kubenswrapper[4721]: I0202 14:09:18.125741 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/124e4063-2c55-4307-9d6f-8e9f776a994f-utilities\") pod \"redhat-marketplace-6jl2c\" (UID: \"124e4063-2c55-4307-9d6f-8e9f776a994f\") " pod="openshift-marketplace/redhat-marketplace-6jl2c" Feb 02 14:09:18 crc kubenswrapper[4721]: I0202 14:09:18.126192 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6jl2c"] Feb 02 14:09:18 crc kubenswrapper[4721]: I0202 14:09:18.227510 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/124e4063-2c55-4307-9d6f-8e9f776a994f-catalog-content\") pod \"redhat-marketplace-6jl2c\" (UID: \"124e4063-2c55-4307-9d6f-8e9f776a994f\") " pod="openshift-marketplace/redhat-marketplace-6jl2c" Feb 02 14:09:18 crc kubenswrapper[4721]: I0202 14:09:18.227648 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xkxx\" (UniqueName: \"kubernetes.io/projected/124e4063-2c55-4307-9d6f-8e9f776a994f-kube-api-access-2xkxx\") pod \"redhat-marketplace-6jl2c\" (UID: \"124e4063-2c55-4307-9d6f-8e9f776a994f\") " pod="openshift-marketplace/redhat-marketplace-6jl2c" Feb 02 14:09:18 crc kubenswrapper[4721]: I0202 14:09:18.227718 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/124e4063-2c55-4307-9d6f-8e9f776a994f-utilities\") pod \"redhat-marketplace-6jl2c\" (UID: \"124e4063-2c55-4307-9d6f-8e9f776a994f\") " pod="openshift-marketplace/redhat-marketplace-6jl2c" Feb 02 14:09:18 crc kubenswrapper[4721]: I0202 14:09:18.228119 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/124e4063-2c55-4307-9d6f-8e9f776a994f-catalog-content\") pod \"redhat-marketplace-6jl2c\" (UID: \"124e4063-2c55-4307-9d6f-8e9f776a994f\") " pod="openshift-marketplace/redhat-marketplace-6jl2c" Feb 02 14:09:18 crc kubenswrapper[4721]: I0202 14:09:18.228249 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/124e4063-2c55-4307-9d6f-8e9f776a994f-utilities\") pod \"redhat-marketplace-6jl2c\" (UID: \"124e4063-2c55-4307-9d6f-8e9f776a994f\") " pod="openshift-marketplace/redhat-marketplace-6jl2c" Feb 02 14:09:18 crc kubenswrapper[4721]: I0202 14:09:18.265452 4721 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2xkxx\" (UniqueName: \"kubernetes.io/projected/124e4063-2c55-4307-9d6f-8e9f776a994f-kube-api-access-2xkxx\") pod \"redhat-marketplace-6jl2c\" (UID: \"124e4063-2c55-4307-9d6f-8e9f776a994f\") " pod="openshift-marketplace/redhat-marketplace-6jl2c" Feb 02 14:09:18 crc kubenswrapper[4721]: I0202 14:09:18.455117 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6jl2c" Feb 02 14:09:18 crc kubenswrapper[4721]: I0202 14:09:18.988872 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6jl2c"] Feb 02 14:09:19 crc kubenswrapper[4721]: I0202 14:09:19.681254 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jl2c" event={"ID":"124e4063-2c55-4307-9d6f-8e9f776a994f","Type":"ContainerStarted","Data":"46db6615e27a562cfbe68ae9f7728fbd5e8ecdf926c7ecc60b506a4cfc3648cf"} Feb 02 14:09:20 crc kubenswrapper[4721]: I0202 14:09:20.691734 4721 generic.go:334] "Generic (PLEG): container finished" podID="124e4063-2c55-4307-9d6f-8e9f776a994f" containerID="5610d8b090cbb2b0bfe52ec49beb490a7d7ac634ccbe498d94376068a10d066e" exitCode=0 Feb 02 14:09:20 crc kubenswrapper[4721]: I0202 14:09:20.692256 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jl2c" event={"ID":"124e4063-2c55-4307-9d6f-8e9f776a994f","Type":"ContainerDied","Data":"5610d8b090cbb2b0bfe52ec49beb490a7d7ac634ccbe498d94376068a10d066e"} Feb 02 14:09:21 crc kubenswrapper[4721]: I0202 14:09:21.705856 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jl2c" event={"ID":"124e4063-2c55-4307-9d6f-8e9f776a994f","Type":"ContainerStarted","Data":"130a4ac91634cb769f646163be7809f560730b1d14eca045d2fff194f2e1d37d"} Feb 02 14:09:22 crc kubenswrapper[4721]: I0202 14:09:22.726512 4721 generic.go:334] "Generic (PLEG): container finished" podID="124e4063-2c55-4307-9d6f-8e9f776a994f" containerID="130a4ac91634cb769f646163be7809f560730b1d14eca045d2fff194f2e1d37d" exitCode=0 Feb 02 14:09:22 crc kubenswrapper[4721]: I0202 14:09:22.726588 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jl2c" event={"ID":"124e4063-2c55-4307-9d6f-8e9f776a994f","Type":"ContainerDied","Data":"130a4ac91634cb769f646163be7809f560730b1d14eca045d2fff194f2e1d37d"} Feb 02 14:09:23 crc kubenswrapper[4721]: I0202 14:09:23.739292 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jl2c" event={"ID":"124e4063-2c55-4307-9d6f-8e9f776a994f","Type":"ContainerStarted","Data":"c44eb14cba4af2787a2a4a196d4a91f350001847fe8980913c05863545f8fe41"} Feb 02 14:09:23 crc kubenswrapper[4721]: I0202 14:09:23.783888 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6jl2c" podStartSLOduration=3.375133914 podStartE2EDuration="5.783864062s" podCreationTimestamp="2026-02-02 14:09:18 +0000 UTC" firstStartedPulling="2026-02-02 14:09:20.694568529 +0000 UTC m=+4100.997082918" lastFinishedPulling="2026-02-02 14:09:23.103298687 +0000 UTC m=+4103.405813066" observedRunningTime="2026-02-02 14:09:23.755540647 +0000 UTC m=+4104.058055036" watchObservedRunningTime="2026-02-02 14:09:23.783864062 +0000 UTC m=+4104.086378441" Feb 02 14:09:28 crc kubenswrapper[4721]: I0202 14:09:28.455775 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-6jl2c" Feb 02 14:09:28 crc kubenswrapper[4721]: I0202 14:09:28.456175 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6jl2c" Feb 02 14:09:28 crc kubenswrapper[4721]: I0202 14:09:28.513775 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6jl2c" Feb 02 14:09:28 crc kubenswrapper[4721]: I0202 14:09:28.851012 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6jl2c" Feb 02 14:09:28 crc kubenswrapper[4721]: I0202 14:09:28.909131 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6jl2c"] Feb 02 14:09:29 crc kubenswrapper[4721]: I0202 14:09:29.409664 4721 scope.go:117] "RemoveContainer" containerID="5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3" Feb 02 14:09:29 crc kubenswrapper[4721]: E0202 14:09:29.410380 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:09:30 crc kubenswrapper[4721]: I0202 14:09:30.819021 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6jl2c" podUID="124e4063-2c55-4307-9d6f-8e9f776a994f" containerName="registry-server" containerID="cri-o://c44eb14cba4af2787a2a4a196d4a91f350001847fe8980913c05863545f8fe41" gracePeriod=2 Feb 02 14:09:31 crc kubenswrapper[4721]: I0202 14:09:31.396633 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6jl2c" Feb 02 14:09:31 crc kubenswrapper[4721]: I0202 14:09:31.557218 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/124e4063-2c55-4307-9d6f-8e9f776a994f-catalog-content\") pod \"124e4063-2c55-4307-9d6f-8e9f776a994f\" (UID: \"124e4063-2c55-4307-9d6f-8e9f776a994f\") " Feb 02 14:09:31 crc kubenswrapper[4721]: I0202 14:09:31.557559 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/124e4063-2c55-4307-9d6f-8e9f776a994f-utilities\") pod \"124e4063-2c55-4307-9d6f-8e9f776a994f\" (UID: \"124e4063-2c55-4307-9d6f-8e9f776a994f\") " Feb 02 14:09:31 crc kubenswrapper[4721]: I0202 14:09:31.557622 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xkxx\" (UniqueName: \"kubernetes.io/projected/124e4063-2c55-4307-9d6f-8e9f776a994f-kube-api-access-2xkxx\") pod \"124e4063-2c55-4307-9d6f-8e9f776a994f\" (UID: \"124e4063-2c55-4307-9d6f-8e9f776a994f\") " Feb 02 14:09:31 crc kubenswrapper[4721]: I0202 14:09:31.558906 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/124e4063-2c55-4307-9d6f-8e9f776a994f-utilities" (OuterVolumeSpecName: "utilities") pod "124e4063-2c55-4307-9d6f-8e9f776a994f" (UID: "124e4063-2c55-4307-9d6f-8e9f776a994f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:09:31 crc kubenswrapper[4721]: I0202 14:09:31.575375 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/124e4063-2c55-4307-9d6f-8e9f776a994f-kube-api-access-2xkxx" (OuterVolumeSpecName: "kube-api-access-2xkxx") pod "124e4063-2c55-4307-9d6f-8e9f776a994f" (UID: "124e4063-2c55-4307-9d6f-8e9f776a994f"). InnerVolumeSpecName "kube-api-access-2xkxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:09:31 crc kubenswrapper[4721]: I0202 14:09:31.594216 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/124e4063-2c55-4307-9d6f-8e9f776a994f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "124e4063-2c55-4307-9d6f-8e9f776a994f" (UID: "124e4063-2c55-4307-9d6f-8e9f776a994f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:09:31 crc kubenswrapper[4721]: I0202 14:09:31.659604 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/124e4063-2c55-4307-9d6f-8e9f776a994f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:09:31 crc kubenswrapper[4721]: I0202 14:09:31.659635 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/124e4063-2c55-4307-9d6f-8e9f776a994f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:09:31 crc kubenswrapper[4721]: I0202 14:09:31.659645 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xkxx\" (UniqueName: \"kubernetes.io/projected/124e4063-2c55-4307-9d6f-8e9f776a994f-kube-api-access-2xkxx\") on node \"crc\" DevicePath \"\"" Feb 02 14:09:31 crc kubenswrapper[4721]: I0202 14:09:31.831602 4721 generic.go:334] "Generic (PLEG): container finished" podID="124e4063-2c55-4307-9d6f-8e9f776a994f" containerID="c44eb14cba4af2787a2a4a196d4a91f350001847fe8980913c05863545f8fe41" exitCode=0 Feb 02 14:09:31 crc kubenswrapper[4721]: I0202 14:09:31.831674 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6jl2c" Feb 02 14:09:31 crc kubenswrapper[4721]: I0202 14:09:31.832161 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jl2c" event={"ID":"124e4063-2c55-4307-9d6f-8e9f776a994f","Type":"ContainerDied","Data":"c44eb14cba4af2787a2a4a196d4a91f350001847fe8980913c05863545f8fe41"} Feb 02 14:09:31 crc kubenswrapper[4721]: I0202 14:09:31.835098 4721 scope.go:117] "RemoveContainer" containerID="c44eb14cba4af2787a2a4a196d4a91f350001847fe8980913c05863545f8fe41" Feb 02 14:09:31 crc kubenswrapper[4721]: I0202 14:09:31.835018 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jl2c" event={"ID":"124e4063-2c55-4307-9d6f-8e9f776a994f","Type":"ContainerDied","Data":"46db6615e27a562cfbe68ae9f7728fbd5e8ecdf926c7ecc60b506a4cfc3648cf"} Feb 02 14:09:31 crc kubenswrapper[4721]: I0202 14:09:31.858932 4721 scope.go:117] "RemoveContainer" containerID="130a4ac91634cb769f646163be7809f560730b1d14eca045d2fff194f2e1d37d" Feb 02 14:09:31 crc kubenswrapper[4721]: I0202 14:09:31.881108 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6jl2c"] Feb 02 14:09:31 crc kubenswrapper[4721]: I0202 14:09:31.893362 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6jl2c"] Feb 02 14:09:31 crc kubenswrapper[4721]: I0202 14:09:31.901220 4721 scope.go:117] "RemoveContainer" containerID="5610d8b090cbb2b0bfe52ec49beb490a7d7ac634ccbe498d94376068a10d066e" Feb 02 14:09:31 crc kubenswrapper[4721]: I0202 14:09:31.946838 4721 scope.go:117] "RemoveContainer" containerID="c44eb14cba4af2787a2a4a196d4a91f350001847fe8980913c05863545f8fe41" Feb 02 14:09:31 crc kubenswrapper[4721]: E0202 14:09:31.947526 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c44eb14cba4af2787a2a4a196d4a91f350001847fe8980913c05863545f8fe41\": container with ID starting with c44eb14cba4af2787a2a4a196d4a91f350001847fe8980913c05863545f8fe41 not found: ID does not exist" containerID="c44eb14cba4af2787a2a4a196d4a91f350001847fe8980913c05863545f8fe41" Feb 02 14:09:31 crc kubenswrapper[4721]: I0202 14:09:31.947565 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c44eb14cba4af2787a2a4a196d4a91f350001847fe8980913c05863545f8fe41"} err="failed to get container status \"c44eb14cba4af2787a2a4a196d4a91f350001847fe8980913c05863545f8fe41\": rpc error: code = NotFound desc = could not find container \"c44eb14cba4af2787a2a4a196d4a91f350001847fe8980913c05863545f8fe41\": container with ID starting with c44eb14cba4af2787a2a4a196d4a91f350001847fe8980913c05863545f8fe41 not found: ID does not exist" Feb 02 14:09:31 crc kubenswrapper[4721]: I0202 14:09:31.947591 4721 scope.go:117] "RemoveContainer" containerID="130a4ac91634cb769f646163be7809f560730b1d14eca045d2fff194f2e1d37d" Feb 02 14:09:31 crc kubenswrapper[4721]: E0202 14:09:31.948014 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"130a4ac91634cb769f646163be7809f560730b1d14eca045d2fff194f2e1d37d\": container with ID starting with 130a4ac91634cb769f646163be7809f560730b1d14eca045d2fff194f2e1d37d not found: ID does not exist" containerID="130a4ac91634cb769f646163be7809f560730b1d14eca045d2fff194f2e1d37d"
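
[Editor's note] The ContainerStatus from runtime service failed / DeleteContainer returned error pairs around this point are a benign cleanup race rather than a real failure: the kubelet re-issues RemoveContainer for containers of a pod that was just removed, CRI-O has already deleted them, and the NotFound answer is logged and dropped. A sketch of that idempotent-delete pattern; the remove callback is a hypothetical stand-in for the CRI call, not kubelet's actual code:

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // deleteContainer treats a NotFound answer from the runtime as success:
    // the container is already gone, so there is nothing left to do and no
    // point retrying. Any other error is surfaced to the caller.
    func deleteContainer(remove func(id string) error, id string) error {
        err := remove(id)
        if status.Code(err) == codes.NotFound {
            return nil // already removed elsewhere
        }
        return err
    }

    func main() {
        gone := func(id string) error {
            return status.Error(codes.NotFound, "could not find container "+id)
        }
        fmt.Println(deleteContainer(gone, "c44eb14c")) // <nil>
    }
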
Feb 02 14:09:31 crc kubenswrapper[4721]: I0202 14:09:31.948151 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"130a4ac91634cb769f646163be7809f560730b1d14eca045d2fff194f2e1d37d"} err="failed to get container status \"130a4ac91634cb769f646163be7809f560730b1d14eca045d2fff194f2e1d37d\": rpc error: code = NotFound desc = could not find container \"130a4ac91634cb769f646163be7809f560730b1d14eca045d2fff194f2e1d37d\": container with ID starting with 130a4ac91634cb769f646163be7809f560730b1d14eca045d2fff194f2e1d37d not found: ID does not exist" Feb 02 14:09:31 crc kubenswrapper[4721]: I0202 14:09:31.948248 4721 scope.go:117] "RemoveContainer" containerID="5610d8b090cbb2b0bfe52ec49beb490a7d7ac634ccbe498d94376068a10d066e" Feb 02 14:09:31 crc kubenswrapper[4721]: E0202 14:09:31.948647 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5610d8b090cbb2b0bfe52ec49beb490a7d7ac634ccbe498d94376068a10d066e\": container with ID starting with 5610d8b090cbb2b0bfe52ec49beb490a7d7ac634ccbe498d94376068a10d066e not found: ID does not exist" containerID="5610d8b090cbb2b0bfe52ec49beb490a7d7ac634ccbe498d94376068a10d066e" Feb 02 14:09:31 crc kubenswrapper[4721]: I0202 14:09:31.948702 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5610d8b090cbb2b0bfe52ec49beb490a7d7ac634ccbe498d94376068a10d066e"} err="failed to get container status \"5610d8b090cbb2b0bfe52ec49beb490a7d7ac634ccbe498d94376068a10d066e\": rpc error: code = NotFound desc = could not find container \"5610d8b090cbb2b0bfe52ec49beb490a7d7ac634ccbe498d94376068a10d066e\": container with ID starting with 5610d8b090cbb2b0bfe52ec49beb490a7d7ac634ccbe498d94376068a10d066e not found: ID does not exist" Feb 02 14:09:32 crc kubenswrapper[4721]: I0202 14:09:32.442972 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="124e4063-2c55-4307-9d6f-8e9f776a994f" path="/var/lib/kubelet/pods/124e4063-2c55-4307-9d6f-8e9f776a994f/volumes" Feb 02 14:09:44 crc kubenswrapper[4721]: I0202 14:09:44.410333 4721 scope.go:117] "RemoveContainer" containerID="5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3" Feb 02 14:09:44 crc kubenswrapper[4721]: E0202 14:09:44.411649 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:09:59 crc kubenswrapper[4721]: I0202 14:09:59.410662 4721 scope.go:117] "RemoveContainer" containerID="5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3" Feb 02 14:09:59 crc kubenswrapper[4721]: E0202 14:09:59.411744 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:10:12 crc kubenswrapper[4721]: I0202 14:10:12.411664 4721 scope.go:117] "RemoveContainer" containerID="5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3"
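
[Editor's note] The machine-config-daemon errors that repeat above are the pod worker re-syncing every 10-15 seconds and being refused while the container's crash-loop backoff window is still open; the window is capped at the back-off 5m0s quoted in each error. It finally lapses at 14:10:25 in the records below, where RemoveContainer proceeds and a new container (814c7c14...) starts. A sketch of capped exponential backoff in that style; the 10s base and per-failure doubling are kubelet defaults assumed here, only the 5m cap is visible in this log:

    package main

    import (
        "fmt"
        "time"
    )

    // restartDelay doubles the wait after each container failure and caps
    // it at five minutes, in the style of the kubelet's container restart
    // backoff (base and factor are assumptions; the cap matches the log).
    func restartDelay(failures int) time.Duration {
        const (
            base     = 10 * time.Second
            maxDelay = 5 * time.Minute
        )
        d := base
        for i := 1; i < failures; i++ {
            d *= 2
            if d >= maxDelay {
                return maxDelay
            }
        }
        return d
    }

    func main() {
        for n := 1; n <= 7; n++ {
            fmt.Printf("failure %d -> wait %s\n", n, restartDelay(n))
        }
        // From the sixth failure on, every restart waits the full 5m0s
        // seen in the "back-off 5m0s" errors.
    }
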
kubenswrapper[4721]: E0202 14:10:12.414503 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:10:25 crc kubenswrapper[4721]: I0202 14:10:25.410977 4721 scope.go:117] "RemoveContainer" containerID="5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3" Feb 02 14:10:26 crc kubenswrapper[4721]: I0202 14:10:26.425095 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerStarted","Data":"814c7c146f4891242cef124426d3e01f7b0766cdf2128bc2a21fd29b8f9fdeb0"} Feb 02 14:12:44 crc kubenswrapper[4721]: I0202 14:12:44.763549 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:12:44 crc kubenswrapper[4721]: I0202 14:12:44.764184 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:13:14 crc kubenswrapper[4721]: I0202 14:13:14.763646 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:13:14 crc kubenswrapper[4721]: I0202 14:13:14.764283 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:13:44 crc kubenswrapper[4721]: I0202 14:13:44.763343 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:13:44 crc kubenswrapper[4721]: I0202 14:13:44.763947 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:13:44 crc kubenswrapper[4721]: I0202 14:13:44.763999 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" Feb 02 14:13:44 crc kubenswrapper[4721]: I0202 14:13:44.764999 4721 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"814c7c146f4891242cef124426d3e01f7b0766cdf2128bc2a21fd29b8f9fdeb0"} pod="openshift-machine-config-operator/machine-config-daemon-rppjz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 14:13:44 crc kubenswrapper[4721]: I0202 14:13:44.765067 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" containerID="cri-o://814c7c146f4891242cef124426d3e01f7b0766cdf2128bc2a21fd29b8f9fdeb0" gracePeriod=600 Feb 02 14:13:45 crc kubenswrapper[4721]: I0202 14:13:45.409958 4721 generic.go:334] "Generic (PLEG): container finished" podID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerID="814c7c146f4891242cef124426d3e01f7b0766cdf2128bc2a21fd29b8f9fdeb0" exitCode=0 Feb 02 14:13:45 crc kubenswrapper[4721]: I0202 14:13:45.409992 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerDied","Data":"814c7c146f4891242cef124426d3e01f7b0766cdf2128bc2a21fd29b8f9fdeb0"} Feb 02 14:13:45 crc kubenswrapper[4721]: I0202 14:13:45.410562 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerStarted","Data":"6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd"} Feb 02 14:13:45 crc kubenswrapper[4721]: I0202 14:13:45.410587 4721 scope.go:117] "RemoveContainer" containerID="5cb4b1f20ddb60a671585bed29dfb5cdee5b7a5164771c89413e46dd4b8502d3" Feb 02 14:15:00 crc kubenswrapper[4721]: I0202 14:15:00.372635 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500695-2dpd5"] Feb 02 14:15:00 crc kubenswrapper[4721]: E0202 14:15:00.373588 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124e4063-2c55-4307-9d6f-8e9f776a994f" containerName="extract-utilities" Feb 02 14:15:00 crc kubenswrapper[4721]: I0202 14:15:00.373609 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="124e4063-2c55-4307-9d6f-8e9f776a994f" containerName="extract-utilities" Feb 02 14:15:00 crc kubenswrapper[4721]: E0202 14:15:00.373664 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124e4063-2c55-4307-9d6f-8e9f776a994f" containerName="extract-content" Feb 02 14:15:00 crc kubenswrapper[4721]: I0202 14:15:00.373672 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="124e4063-2c55-4307-9d6f-8e9f776a994f" containerName="extract-content" Feb 02 14:15:00 crc kubenswrapper[4721]: E0202 14:15:00.373691 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124e4063-2c55-4307-9d6f-8e9f776a994f" containerName="registry-server" Feb 02 14:15:00 crc kubenswrapper[4721]: I0202 14:15:00.373700 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="124e4063-2c55-4307-9d6f-8e9f776a994f" containerName="registry-server" Feb 02 14:15:00 crc kubenswrapper[4721]: I0202 14:15:00.373958 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="124e4063-2c55-4307-9d6f-8e9f776a994f" containerName="registry-server" Feb 02 14:15:00 crc kubenswrapper[4721]: I0202 14:15:00.374788 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500695-2dpd5" Feb 02 14:15:00 crc kubenswrapper[4721]: I0202 14:15:00.377003 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 14:15:00 crc kubenswrapper[4721]: I0202 14:15:00.383207 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500695-2dpd5"] Feb 02 14:15:00 crc kubenswrapper[4721]: I0202 14:15:00.383265 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 14:15:00 crc kubenswrapper[4721]: I0202 14:15:00.485666 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea67d838-a947-4428-a7a0-f1a8484eadf1-config-volume\") pod \"collect-profiles-29500695-2dpd5\" (UID: \"ea67d838-a947-4428-a7a0-f1a8484eadf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500695-2dpd5" Feb 02 14:15:00 crc kubenswrapper[4721]: I0202 14:15:00.486417 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rq6h\" (UniqueName: \"kubernetes.io/projected/ea67d838-a947-4428-a7a0-f1a8484eadf1-kube-api-access-7rq6h\") pod \"collect-profiles-29500695-2dpd5\" (UID: \"ea67d838-a947-4428-a7a0-f1a8484eadf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500695-2dpd5" Feb 02 14:15:00 crc kubenswrapper[4721]: I0202 14:15:00.487319 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea67d838-a947-4428-a7a0-f1a8484eadf1-secret-volume\") pod \"collect-profiles-29500695-2dpd5\" (UID: \"ea67d838-a947-4428-a7a0-f1a8484eadf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500695-2dpd5" Feb 02 14:15:00 crc kubenswrapper[4721]: I0202 14:15:00.590607 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea67d838-a947-4428-a7a0-f1a8484eadf1-config-volume\") pod \"collect-profiles-29500695-2dpd5\" (UID: \"ea67d838-a947-4428-a7a0-f1a8484eadf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500695-2dpd5" Feb 02 14:15:00 crc kubenswrapper[4721]: I0202 14:15:00.590769 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rq6h\" (UniqueName: \"kubernetes.io/projected/ea67d838-a947-4428-a7a0-f1a8484eadf1-kube-api-access-7rq6h\") pod \"collect-profiles-29500695-2dpd5\" (UID: \"ea67d838-a947-4428-a7a0-f1a8484eadf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500695-2dpd5" Feb 02 14:15:00 crc kubenswrapper[4721]: I0202 14:15:00.590848 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea67d838-a947-4428-a7a0-f1a8484eadf1-secret-volume\") pod \"collect-profiles-29500695-2dpd5\" (UID: \"ea67d838-a947-4428-a7a0-f1a8484eadf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500695-2dpd5" Feb 02 14:15:00 crc kubenswrapper[4721]: I0202 14:15:00.592758 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 14:15:00 crc kubenswrapper[4721]: I0202 
14:15:00.599344 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea67d838-a947-4428-a7a0-f1a8484eadf1-secret-volume\") pod \"collect-profiles-29500695-2dpd5\" (UID: \"ea67d838-a947-4428-a7a0-f1a8484eadf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500695-2dpd5" Feb 02 14:15:00 crc kubenswrapper[4721]: I0202 14:15:00.601871 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea67d838-a947-4428-a7a0-f1a8484eadf1-config-volume\") pod \"collect-profiles-29500695-2dpd5\" (UID: \"ea67d838-a947-4428-a7a0-f1a8484eadf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500695-2dpd5" Feb 02 14:15:00 crc kubenswrapper[4721]: I0202 14:15:00.610912 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rq6h\" (UniqueName: \"kubernetes.io/projected/ea67d838-a947-4428-a7a0-f1a8484eadf1-kube-api-access-7rq6h\") pod \"collect-profiles-29500695-2dpd5\" (UID: \"ea67d838-a947-4428-a7a0-f1a8484eadf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500695-2dpd5" Feb 02 14:15:00 crc kubenswrapper[4721]: I0202 14:15:00.693883 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 14:15:00 crc kubenswrapper[4721]: I0202 14:15:00.702553 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500695-2dpd5" Feb 02 14:15:01 crc kubenswrapper[4721]: I0202 14:15:01.240295 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500695-2dpd5"] Feb 02 14:15:02 crc kubenswrapper[4721]: I0202 14:15:02.234094 4721 generic.go:334] "Generic (PLEG): container finished" podID="ea67d838-a947-4428-a7a0-f1a8484eadf1" containerID="166786c8314d83bae6bc053fd1722409d56df69d5b79d468cc598d016734d7d3" exitCode=0 Feb 02 14:15:02 crc kubenswrapper[4721]: I0202 14:15:02.234593 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500695-2dpd5" event={"ID":"ea67d838-a947-4428-a7a0-f1a8484eadf1","Type":"ContainerDied","Data":"166786c8314d83bae6bc053fd1722409d56df69d5b79d468cc598d016734d7d3"} Feb 02 14:15:02 crc kubenswrapper[4721]: I0202 14:15:02.235628 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500695-2dpd5" event={"ID":"ea67d838-a947-4428-a7a0-f1a8484eadf1","Type":"ContainerStarted","Data":"237e8b3522a50bd3200fd69a2fe9dd9469f97b50e6a3f2e0c242ea8f58c767e7"} Feb 02 14:15:03 crc kubenswrapper[4721]: I0202 14:15:03.641388 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500695-2dpd5" Feb 02 14:15:03 crc kubenswrapper[4721]: I0202 14:15:03.770239 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rq6h\" (UniqueName: \"kubernetes.io/projected/ea67d838-a947-4428-a7a0-f1a8484eadf1-kube-api-access-7rq6h\") pod \"ea67d838-a947-4428-a7a0-f1a8484eadf1\" (UID: \"ea67d838-a947-4428-a7a0-f1a8484eadf1\") " Feb 02 14:15:03 crc kubenswrapper[4721]: I0202 14:15:03.770410 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea67d838-a947-4428-a7a0-f1a8484eadf1-secret-volume\") pod \"ea67d838-a947-4428-a7a0-f1a8484eadf1\" (UID: \"ea67d838-a947-4428-a7a0-f1a8484eadf1\") " Feb 02 14:15:03 crc kubenswrapper[4721]: I0202 14:15:03.770623 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea67d838-a947-4428-a7a0-f1a8484eadf1-config-volume\") pod \"ea67d838-a947-4428-a7a0-f1a8484eadf1\" (UID: \"ea67d838-a947-4428-a7a0-f1a8484eadf1\") " Feb 02 14:15:03 crc kubenswrapper[4721]: I0202 14:15:03.771305 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea67d838-a947-4428-a7a0-f1a8484eadf1-config-volume" (OuterVolumeSpecName: "config-volume") pod "ea67d838-a947-4428-a7a0-f1a8484eadf1" (UID: "ea67d838-a947-4428-a7a0-f1a8484eadf1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:15:03 crc kubenswrapper[4721]: I0202 14:15:03.778165 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea67d838-a947-4428-a7a0-f1a8484eadf1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ea67d838-a947-4428-a7a0-f1a8484eadf1" (UID: "ea67d838-a947-4428-a7a0-f1a8484eadf1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:15:03 crc kubenswrapper[4721]: I0202 14:15:03.786552 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea67d838-a947-4428-a7a0-f1a8484eadf1-kube-api-access-7rq6h" (OuterVolumeSpecName: "kube-api-access-7rq6h") pod "ea67d838-a947-4428-a7a0-f1a8484eadf1" (UID: "ea67d838-a947-4428-a7a0-f1a8484eadf1"). InnerVolumeSpecName "kube-api-access-7rq6h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:15:03 crc kubenswrapper[4721]: I0202 14:15:03.873186 4721 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea67d838-a947-4428-a7a0-f1a8484eadf1-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 14:15:03 crc kubenswrapper[4721]: I0202 14:15:03.873401 4721 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea67d838-a947-4428-a7a0-f1a8484eadf1-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 14:15:03 crc kubenswrapper[4721]: I0202 14:15:03.873461 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rq6h\" (UniqueName: \"kubernetes.io/projected/ea67d838-a947-4428-a7a0-f1a8484eadf1-kube-api-access-7rq6h\") on node \"crc\" DevicePath \"\"" Feb 02 14:15:04 crc kubenswrapper[4721]: I0202 14:15:04.258626 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500695-2dpd5" event={"ID":"ea67d838-a947-4428-a7a0-f1a8484eadf1","Type":"ContainerDied","Data":"237e8b3522a50bd3200fd69a2fe9dd9469f97b50e6a3f2e0c242ea8f58c767e7"} Feb 02 14:15:04 crc kubenswrapper[4721]: I0202 14:15:04.258666 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="237e8b3522a50bd3200fd69a2fe9dd9469f97b50e6a3f2e0c242ea8f58c767e7" Feb 02 14:15:04 crc kubenswrapper[4721]: I0202 14:15:04.258689 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500695-2dpd5" Feb 02 14:15:04 crc kubenswrapper[4721]: I0202 14:15:04.724631 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500650-fclwl"] Feb 02 14:15:04 crc kubenswrapper[4721]: I0202 14:15:04.734980 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500650-fclwl"] Feb 02 14:15:06 crc kubenswrapper[4721]: I0202 14:15:06.424355 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19b4436-4c9b-4671-acef-1ba5685cb660" path="/var/lib/kubelet/pods/d19b4436-4c9b-4671-acef-1ba5685cb660/volumes" Feb 02 14:15:14 crc kubenswrapper[4721]: I0202 14:15:14.927340 4721 scope.go:117] "RemoveContainer" containerID="5d8115a3c44a297e5941de9c7ae62ed0d1533603d2bcff7cfc2aadd64924c9b1" Feb 02 14:16:14 crc kubenswrapper[4721]: I0202 14:16:14.763284 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:16:14 crc kubenswrapper[4721]: I0202 14:16:14.764969 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:16:44 crc kubenswrapper[4721]: I0202 14:16:44.763722 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 02 14:16:44 crc kubenswrapper[4721]: I0202 14:16:44.764278 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:17:14 crc kubenswrapper[4721]: I0202 14:17:14.763271 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:17:14 crc kubenswrapper[4721]: I0202 14:17:14.763892 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:17:14 crc kubenswrapper[4721]: I0202 14:17:14.763957 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" Feb 02 14:17:14 crc kubenswrapper[4721]: I0202 14:17:14.765042 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd"} pod="openshift-machine-config-operator/machine-config-daemon-rppjz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 14:17:14 crc kubenswrapper[4721]: I0202 14:17:14.765127 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" containerID="cri-o://6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd" gracePeriod=600 Feb 02 14:17:14 crc kubenswrapper[4721]: E0202 14:17:14.888353 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:17:15 crc kubenswrapper[4721]: I0202 14:17:15.620887 4721 generic.go:334] "Generic (PLEG): container finished" podID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerID="6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd" exitCode=0 Feb 02 14:17:15 crc kubenswrapper[4721]: I0202 14:17:15.620952 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerDied","Data":"6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd"} Feb 02 14:17:15 crc kubenswrapper[4721]: I0202 14:17:15.621050 4721 scope.go:117] "RemoveContainer" containerID="814c7c146f4891242cef124426d3e01f7b0766cdf2128bc2a21fd29b8f9fdeb0" Feb 02 14:17:15 crc kubenswrapper[4721]: I0202 14:17:15.621771 4721 scope.go:117] "RemoveContainer" 
containerID="6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd" Feb 02 14:17:15 crc kubenswrapper[4721]: E0202 14:17:15.622503 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:17:26 crc kubenswrapper[4721]: I0202 14:17:26.410510 4721 scope.go:117] "RemoveContainer" containerID="6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd" Feb 02 14:17:26 crc kubenswrapper[4721]: E0202 14:17:26.411476 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:17:41 crc kubenswrapper[4721]: I0202 14:17:41.409951 4721 scope.go:117] "RemoveContainer" containerID="6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd" Feb 02 14:17:41 crc kubenswrapper[4721]: E0202 14:17:41.410738 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:17:56 crc kubenswrapper[4721]: I0202 14:17:56.410492 4721 scope.go:117] "RemoveContainer" containerID="6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd" Feb 02 14:17:56 crc kubenswrapper[4721]: E0202 14:17:56.411445 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:18:10 crc kubenswrapper[4721]: I0202 14:18:10.417635 4721 scope.go:117] "RemoveContainer" containerID="6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd" Feb 02 14:18:10 crc kubenswrapper[4721]: E0202 14:18:10.418470 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:18:17 crc kubenswrapper[4721]: I0202 14:18:17.389343 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xk8wk"] Feb 02 14:18:17 crc kubenswrapper[4721]: E0202 14:18:17.390610 4721 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="ea67d838-a947-4428-a7a0-f1a8484eadf1" containerName="collect-profiles" Feb 02 14:18:17 crc kubenswrapper[4721]: I0202 14:18:17.390628 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea67d838-a947-4428-a7a0-f1a8484eadf1" containerName="collect-profiles" Feb 02 14:18:17 crc kubenswrapper[4721]: I0202 14:18:17.390920 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea67d838-a947-4428-a7a0-f1a8484eadf1" containerName="collect-profiles" Feb 02 14:18:17 crc kubenswrapper[4721]: I0202 14:18:17.393426 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xk8wk" Feb 02 14:18:17 crc kubenswrapper[4721]: I0202 14:18:17.404762 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xk8wk"] Feb 02 14:18:17 crc kubenswrapper[4721]: I0202 14:18:17.479541 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b23ea8-3d9e-4f03-8331-6ec86d0f4760-utilities\") pod \"certified-operators-xk8wk\" (UID: \"b3b23ea8-3d9e-4f03-8331-6ec86d0f4760\") " pod="openshift-marketplace/certified-operators-xk8wk" Feb 02 14:18:17 crc kubenswrapper[4721]: I0202 14:18:17.479921 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3b23ea8-3d9e-4f03-8331-6ec86d0f4760-catalog-content\") pod \"certified-operators-xk8wk\" (UID: \"b3b23ea8-3d9e-4f03-8331-6ec86d0f4760\") " pod="openshift-marketplace/certified-operators-xk8wk" Feb 02 14:18:17 crc kubenswrapper[4721]: I0202 14:18:17.480215 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glqf6\" (UniqueName: \"kubernetes.io/projected/b3b23ea8-3d9e-4f03-8331-6ec86d0f4760-kube-api-access-glqf6\") pod \"certified-operators-xk8wk\" (UID: \"b3b23ea8-3d9e-4f03-8331-6ec86d0f4760\") " pod="openshift-marketplace/certified-operators-xk8wk" Feb 02 14:18:17 crc kubenswrapper[4721]: I0202 14:18:17.583425 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glqf6\" (UniqueName: \"kubernetes.io/projected/b3b23ea8-3d9e-4f03-8331-6ec86d0f4760-kube-api-access-glqf6\") pod \"certified-operators-xk8wk\" (UID: \"b3b23ea8-3d9e-4f03-8331-6ec86d0f4760\") " pod="openshift-marketplace/certified-operators-xk8wk" Feb 02 14:18:17 crc kubenswrapper[4721]: I0202 14:18:17.583698 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b23ea8-3d9e-4f03-8331-6ec86d0f4760-utilities\") pod \"certified-operators-xk8wk\" (UID: \"b3b23ea8-3d9e-4f03-8331-6ec86d0f4760\") " pod="openshift-marketplace/certified-operators-xk8wk" Feb 02 14:18:17 crc kubenswrapper[4721]: I0202 14:18:17.583740 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3b23ea8-3d9e-4f03-8331-6ec86d0f4760-catalog-content\") pod \"certified-operators-xk8wk\" (UID: \"b3b23ea8-3d9e-4f03-8331-6ec86d0f4760\") " pod="openshift-marketplace/certified-operators-xk8wk" Feb 02 14:18:17 crc kubenswrapper[4721]: I0202 14:18:17.584352 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b23ea8-3d9e-4f03-8331-6ec86d0f4760-utilities\") pod 
\"certified-operators-xk8wk\" (UID: \"b3b23ea8-3d9e-4f03-8331-6ec86d0f4760\") " pod="openshift-marketplace/certified-operators-xk8wk" Feb 02 14:18:17 crc kubenswrapper[4721]: I0202 14:18:17.584707 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3b23ea8-3d9e-4f03-8331-6ec86d0f4760-catalog-content\") pod \"certified-operators-xk8wk\" (UID: \"b3b23ea8-3d9e-4f03-8331-6ec86d0f4760\") " pod="openshift-marketplace/certified-operators-xk8wk" Feb 02 14:18:17 crc kubenswrapper[4721]: I0202 14:18:17.869114 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glqf6\" (UniqueName: \"kubernetes.io/projected/b3b23ea8-3d9e-4f03-8331-6ec86d0f4760-kube-api-access-glqf6\") pod \"certified-operators-xk8wk\" (UID: \"b3b23ea8-3d9e-4f03-8331-6ec86d0f4760\") " pod="openshift-marketplace/certified-operators-xk8wk" Feb 02 14:18:18 crc kubenswrapper[4721]: I0202 14:18:18.021790 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xk8wk" Feb 02 14:18:18 crc kubenswrapper[4721]: I0202 14:18:18.586543 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xk8wk"] Feb 02 14:18:18 crc kubenswrapper[4721]: W0202 14:18:18.596560 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3b23ea8_3d9e_4f03_8331_6ec86d0f4760.slice/crio-1f983f1126763143728d005de533645cffe1911ac60549421dcb0c2cdc87a5dc WatchSource:0}: Error finding container 1f983f1126763143728d005de533645cffe1911ac60549421dcb0c2cdc87a5dc: Status 404 returned error can't find the container with id 1f983f1126763143728d005de533645cffe1911ac60549421dcb0c2cdc87a5dc Feb 02 14:18:19 crc kubenswrapper[4721]: I0202 14:18:19.330966 4721 generic.go:334] "Generic (PLEG): container finished" podID="b3b23ea8-3d9e-4f03-8331-6ec86d0f4760" containerID="8e8e88c6705f054ed85bbd287815dbf15b6f91151211a88315ab51ece516163b" exitCode=0 Feb 02 14:18:19 crc kubenswrapper[4721]: I0202 14:18:19.331031 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xk8wk" event={"ID":"b3b23ea8-3d9e-4f03-8331-6ec86d0f4760","Type":"ContainerDied","Data":"8e8e88c6705f054ed85bbd287815dbf15b6f91151211a88315ab51ece516163b"} Feb 02 14:18:19 crc kubenswrapper[4721]: I0202 14:18:19.331389 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xk8wk" event={"ID":"b3b23ea8-3d9e-4f03-8331-6ec86d0f4760","Type":"ContainerStarted","Data":"1f983f1126763143728d005de533645cffe1911ac60549421dcb0c2cdc87a5dc"} Feb 02 14:18:19 crc kubenswrapper[4721]: I0202 14:18:19.334111 4721 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 14:18:21 crc kubenswrapper[4721]: I0202 14:18:21.351781 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xk8wk" event={"ID":"b3b23ea8-3d9e-4f03-8331-6ec86d0f4760","Type":"ContainerStarted","Data":"e9851369fe87abdfd3439cdba5e64ae4259c0ee55e6e212a5674a12f4f7f04eb"} Feb 02 14:18:21 crc kubenswrapper[4721]: I0202 14:18:21.409790 4721 scope.go:117] "RemoveContainer" containerID="6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd" Feb 02 14:18:21 crc kubenswrapper[4721]: E0202 14:18:21.410142 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:18:22 crc kubenswrapper[4721]: I0202 14:18:22.384611 4721 generic.go:334] "Generic (PLEG): container finished" podID="b3b23ea8-3d9e-4f03-8331-6ec86d0f4760" containerID="e9851369fe87abdfd3439cdba5e64ae4259c0ee55e6e212a5674a12f4f7f04eb" exitCode=0 Feb 02 14:18:22 crc kubenswrapper[4721]: I0202 14:18:22.384961 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xk8wk" event={"ID":"b3b23ea8-3d9e-4f03-8331-6ec86d0f4760","Type":"ContainerDied","Data":"e9851369fe87abdfd3439cdba5e64ae4259c0ee55e6e212a5674a12f4f7f04eb"} Feb 02 14:18:23 crc kubenswrapper[4721]: I0202 14:18:23.397759 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xk8wk" event={"ID":"b3b23ea8-3d9e-4f03-8331-6ec86d0f4760","Type":"ContainerStarted","Data":"a51068e3fef3b36eef11f28b1683a109165a966c637a81017f0707d2a0c459a3"} Feb 02 14:18:23 crc kubenswrapper[4721]: I0202 14:18:23.428394 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xk8wk" podStartSLOduration=2.8785531559999997 podStartE2EDuration="6.428373032s" podCreationTimestamp="2026-02-02 14:18:17 +0000 UTC" firstStartedPulling="2026-02-02 14:18:19.333891049 +0000 UTC m=+4639.636405438" lastFinishedPulling="2026-02-02 14:18:22.883710925 +0000 UTC m=+4643.186225314" observedRunningTime="2026-02-02 14:18:23.419673327 +0000 UTC m=+4643.722187716" watchObservedRunningTime="2026-02-02 14:18:23.428373032 +0000 UTC m=+4643.730887421" Feb 02 14:18:28 crc kubenswrapper[4721]: I0202 14:18:28.022434 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xk8wk" Feb 02 14:18:28 crc kubenswrapper[4721]: I0202 14:18:28.024177 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xk8wk" Feb 02 14:18:28 crc kubenswrapper[4721]: I0202 14:18:28.101744 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xk8wk" Feb 02 14:18:28 crc kubenswrapper[4721]: I0202 14:18:28.529431 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xk8wk" Feb 02 14:18:28 crc kubenswrapper[4721]: I0202 14:18:28.582923 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xk8wk"] Feb 02 14:18:30 crc kubenswrapper[4721]: I0202 14:18:30.491706 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xk8wk" podUID="b3b23ea8-3d9e-4f03-8331-6ec86d0f4760" containerName="registry-server" containerID="cri-o://a51068e3fef3b36eef11f28b1683a109165a966c637a81017f0707d2a0c459a3" gracePeriod=2 Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.137758 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xk8wk" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.172545 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3b23ea8-3d9e-4f03-8331-6ec86d0f4760-catalog-content\") pod \"b3b23ea8-3d9e-4f03-8331-6ec86d0f4760\" (UID: \"b3b23ea8-3d9e-4f03-8331-6ec86d0f4760\") " Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.176795 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glqf6\" (UniqueName: \"kubernetes.io/projected/b3b23ea8-3d9e-4f03-8331-6ec86d0f4760-kube-api-access-glqf6\") pod \"b3b23ea8-3d9e-4f03-8331-6ec86d0f4760\" (UID: \"b3b23ea8-3d9e-4f03-8331-6ec86d0f4760\") " Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.177006 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b23ea8-3d9e-4f03-8331-6ec86d0f4760-utilities\") pod \"b3b23ea8-3d9e-4f03-8331-6ec86d0f4760\" (UID: \"b3b23ea8-3d9e-4f03-8331-6ec86d0f4760\") " Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.180053 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3b23ea8-3d9e-4f03-8331-6ec86d0f4760-utilities" (OuterVolumeSpecName: "utilities") pod "b3b23ea8-3d9e-4f03-8331-6ec86d0f4760" (UID: "b3b23ea8-3d9e-4f03-8331-6ec86d0f4760"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.197063 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3b23ea8-3d9e-4f03-8331-6ec86d0f4760-kube-api-access-glqf6" (OuterVolumeSpecName: "kube-api-access-glqf6") pod "b3b23ea8-3d9e-4f03-8331-6ec86d0f4760" (UID: "b3b23ea8-3d9e-4f03-8331-6ec86d0f4760"). InnerVolumeSpecName "kube-api-access-glqf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.284008 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b23ea8-3d9e-4f03-8331-6ec86d0f4760-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.284513 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glqf6\" (UniqueName: \"kubernetes.io/projected/b3b23ea8-3d9e-4f03-8331-6ec86d0f4760-kube-api-access-glqf6\") on node \"crc\" DevicePath \"\"" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.460359 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3b23ea8-3d9e-4f03-8331-6ec86d0f4760-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3b23ea8-3d9e-4f03-8331-6ec86d0f4760" (UID: "b3b23ea8-3d9e-4f03-8331-6ec86d0f4760"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.488499 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3b23ea8-3d9e-4f03-8331-6ec86d0f4760-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.505145 4721 generic.go:334] "Generic (PLEG): container finished" podID="b3b23ea8-3d9e-4f03-8331-6ec86d0f4760" containerID="a51068e3fef3b36eef11f28b1683a109165a966c637a81017f0707d2a0c459a3" exitCode=0 Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.505190 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xk8wk" event={"ID":"b3b23ea8-3d9e-4f03-8331-6ec86d0f4760","Type":"ContainerDied","Data":"a51068e3fef3b36eef11f28b1683a109165a966c637a81017f0707d2a0c459a3"} Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.505214 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xk8wk" event={"ID":"b3b23ea8-3d9e-4f03-8331-6ec86d0f4760","Type":"ContainerDied","Data":"1f983f1126763143728d005de533645cffe1911ac60549421dcb0c2cdc87a5dc"} Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.505231 4721 scope.go:117] "RemoveContainer" containerID="a51068e3fef3b36eef11f28b1683a109165a966c637a81017f0707d2a0c459a3" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.505245 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xk8wk" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.535650 4721 scope.go:117] "RemoveContainer" containerID="e9851369fe87abdfd3439cdba5e64ae4259c0ee55e6e212a5674a12f4f7f04eb" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.555631 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xk8wk"] Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.570734 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xk8wk"] Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.572332 4721 scope.go:117] "RemoveContainer" containerID="8e8e88c6705f054ed85bbd287815dbf15b6f91151211a88315ab51ece516163b" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.607031 4721 scope.go:117] "RemoveContainer" containerID="a51068e3fef3b36eef11f28b1683a109165a966c637a81017f0707d2a0c459a3" Feb 02 14:18:31 crc kubenswrapper[4721]: E0202 14:18:31.607402 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a51068e3fef3b36eef11f28b1683a109165a966c637a81017f0707d2a0c459a3\": container with ID starting with a51068e3fef3b36eef11f28b1683a109165a966c637a81017f0707d2a0c459a3 not found: ID does not exist" containerID="a51068e3fef3b36eef11f28b1683a109165a966c637a81017f0707d2a0c459a3" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.607440 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a51068e3fef3b36eef11f28b1683a109165a966c637a81017f0707d2a0c459a3"} err="failed to get container status \"a51068e3fef3b36eef11f28b1683a109165a966c637a81017f0707d2a0c459a3\": rpc error: code = NotFound desc = could not find container \"a51068e3fef3b36eef11f28b1683a109165a966c637a81017f0707d2a0c459a3\": container with ID starting with a51068e3fef3b36eef11f28b1683a109165a966c637a81017f0707d2a0c459a3 not found: ID does not exist" Feb 02 
14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.607463 4721 scope.go:117] "RemoveContainer" containerID="e9851369fe87abdfd3439cdba5e64ae4259c0ee55e6e212a5674a12f4f7f04eb" Feb 02 14:18:31 crc kubenswrapper[4721]: E0202 14:18:31.607945 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9851369fe87abdfd3439cdba5e64ae4259c0ee55e6e212a5674a12f4f7f04eb\": container with ID starting with e9851369fe87abdfd3439cdba5e64ae4259c0ee55e6e212a5674a12f4f7f04eb not found: ID does not exist" containerID="e9851369fe87abdfd3439cdba5e64ae4259c0ee55e6e212a5674a12f4f7f04eb" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.607968 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9851369fe87abdfd3439cdba5e64ae4259c0ee55e6e212a5674a12f4f7f04eb"} err="failed to get container status \"e9851369fe87abdfd3439cdba5e64ae4259c0ee55e6e212a5674a12f4f7f04eb\": rpc error: code = NotFound desc = could not find container \"e9851369fe87abdfd3439cdba5e64ae4259c0ee55e6e212a5674a12f4f7f04eb\": container with ID starting with e9851369fe87abdfd3439cdba5e64ae4259c0ee55e6e212a5674a12f4f7f04eb not found: ID does not exist" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.607981 4721 scope.go:117] "RemoveContainer" containerID="8e8e88c6705f054ed85bbd287815dbf15b6f91151211a88315ab51ece516163b" Feb 02 14:18:31 crc kubenswrapper[4721]: E0202 14:18:31.608343 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e8e88c6705f054ed85bbd287815dbf15b6f91151211a88315ab51ece516163b\": container with ID starting with 8e8e88c6705f054ed85bbd287815dbf15b6f91151211a88315ab51ece516163b not found: ID does not exist" containerID="8e8e88c6705f054ed85bbd287815dbf15b6f91151211a88315ab51ece516163b" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.608366 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e8e88c6705f054ed85bbd287815dbf15b6f91151211a88315ab51ece516163b"} err="failed to get container status \"8e8e88c6705f054ed85bbd287815dbf15b6f91151211a88315ab51ece516163b\": rpc error: code = NotFound desc = could not find container \"8e8e88c6705f054ed85bbd287815dbf15b6f91151211a88315ab51ece516163b\": container with ID starting with 8e8e88c6705f054ed85bbd287815dbf15b6f91151211a88315ab51ece516163b not found: ID does not exist" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.768299 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wmglb"] Feb 02 14:18:31 crc kubenswrapper[4721]: E0202 14:18:31.768992 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b23ea8-3d9e-4f03-8331-6ec86d0f4760" containerName="extract-content" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.769011 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b23ea8-3d9e-4f03-8331-6ec86d0f4760" containerName="extract-content" Feb 02 14:18:31 crc kubenswrapper[4721]: E0202 14:18:31.769045 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b23ea8-3d9e-4f03-8331-6ec86d0f4760" containerName="registry-server" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.769053 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b23ea8-3d9e-4f03-8331-6ec86d0f4760" containerName="registry-server" Feb 02 14:18:31 crc kubenswrapper[4721]: E0202 14:18:31.769086 4721 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b3b23ea8-3d9e-4f03-8331-6ec86d0f4760" containerName="extract-utilities" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.769096 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b23ea8-3d9e-4f03-8331-6ec86d0f4760" containerName="extract-utilities" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.769413 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3b23ea8-3d9e-4f03-8331-6ec86d0f4760" containerName="registry-server" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.771701 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wmglb" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.796110 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2qwd\" (UniqueName: \"kubernetes.io/projected/263f27d2-5f52-41cf-9ff9-62bd2b195df4-kube-api-access-z2qwd\") pod \"redhat-operators-wmglb\" (UID: \"263f27d2-5f52-41cf-9ff9-62bd2b195df4\") " pod="openshift-marketplace/redhat-operators-wmglb" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.796219 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/263f27d2-5f52-41cf-9ff9-62bd2b195df4-utilities\") pod \"redhat-operators-wmglb\" (UID: \"263f27d2-5f52-41cf-9ff9-62bd2b195df4\") " pod="openshift-marketplace/redhat-operators-wmglb" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.796386 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/263f27d2-5f52-41cf-9ff9-62bd2b195df4-catalog-content\") pod \"redhat-operators-wmglb\" (UID: \"263f27d2-5f52-41cf-9ff9-62bd2b195df4\") " pod="openshift-marketplace/redhat-operators-wmglb" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.802122 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wmglb"] Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.899296 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/263f27d2-5f52-41cf-9ff9-62bd2b195df4-catalog-content\") pod \"redhat-operators-wmglb\" (UID: \"263f27d2-5f52-41cf-9ff9-62bd2b195df4\") " pod="openshift-marketplace/redhat-operators-wmglb" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.899777 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/263f27d2-5f52-41cf-9ff9-62bd2b195df4-catalog-content\") pod \"redhat-operators-wmglb\" (UID: \"263f27d2-5f52-41cf-9ff9-62bd2b195df4\") " pod="openshift-marketplace/redhat-operators-wmglb" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.899788 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2qwd\" (UniqueName: \"kubernetes.io/projected/263f27d2-5f52-41cf-9ff9-62bd2b195df4-kube-api-access-z2qwd\") pod \"redhat-operators-wmglb\" (UID: \"263f27d2-5f52-41cf-9ff9-62bd2b195df4\") " pod="openshift-marketplace/redhat-operators-wmglb" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.899863 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/263f27d2-5f52-41cf-9ff9-62bd2b195df4-utilities\") pod \"redhat-operators-wmglb\" (UID: 
\"263f27d2-5f52-41cf-9ff9-62bd2b195df4\") " pod="openshift-marketplace/redhat-operators-wmglb" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.900169 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/263f27d2-5f52-41cf-9ff9-62bd2b195df4-utilities\") pod \"redhat-operators-wmglb\" (UID: \"263f27d2-5f52-41cf-9ff9-62bd2b195df4\") " pod="openshift-marketplace/redhat-operators-wmglb" Feb 02 14:18:31 crc kubenswrapper[4721]: I0202 14:18:31.932179 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2qwd\" (UniqueName: \"kubernetes.io/projected/263f27d2-5f52-41cf-9ff9-62bd2b195df4-kube-api-access-z2qwd\") pod \"redhat-operators-wmglb\" (UID: \"263f27d2-5f52-41cf-9ff9-62bd2b195df4\") " pod="openshift-marketplace/redhat-operators-wmglb" Feb 02 14:18:32 crc kubenswrapper[4721]: I0202 14:18:32.105030 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wmglb" Feb 02 14:18:32 crc kubenswrapper[4721]: I0202 14:18:32.426880 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3b23ea8-3d9e-4f03-8331-6ec86d0f4760" path="/var/lib/kubelet/pods/b3b23ea8-3d9e-4f03-8331-6ec86d0f4760/volumes" Feb 02 14:18:32 crc kubenswrapper[4721]: I0202 14:18:32.620834 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wmglb"] Feb 02 14:18:33 crc kubenswrapper[4721]: I0202 14:18:33.410577 4721 scope.go:117] "RemoveContainer" containerID="6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd" Feb 02 14:18:33 crc kubenswrapper[4721]: E0202 14:18:33.411457 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:18:33 crc kubenswrapper[4721]: I0202 14:18:33.538920 4721 generic.go:334] "Generic (PLEG): container finished" podID="263f27d2-5f52-41cf-9ff9-62bd2b195df4" containerID="34d8e37ac136c048e14db0e26d0ed848b873e987fc398b35a174771fee1eaac6" exitCode=0 Feb 02 14:18:33 crc kubenswrapper[4721]: I0202 14:18:33.538986 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmglb" event={"ID":"263f27d2-5f52-41cf-9ff9-62bd2b195df4","Type":"ContainerDied","Data":"34d8e37ac136c048e14db0e26d0ed848b873e987fc398b35a174771fee1eaac6"} Feb 02 14:18:33 crc kubenswrapper[4721]: I0202 14:18:33.539020 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmglb" event={"ID":"263f27d2-5f52-41cf-9ff9-62bd2b195df4","Type":"ContainerStarted","Data":"c596e828defbc279db6bc8099dcdc050b683b9191c7efd81b06007f36b53c227"} Feb 02 14:18:35 crc kubenswrapper[4721]: I0202 14:18:35.560582 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmglb" event={"ID":"263f27d2-5f52-41cf-9ff9-62bd2b195df4","Type":"ContainerStarted","Data":"32274dd2adabed2a33935a26e58c08e0355e6b861fe39345b34b7d33608acbb7"} Feb 02 14:18:38 crc kubenswrapper[4721]: I0202 14:18:38.040028 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m8lnf"] Feb 02 14:18:38 crc 
kubenswrapper[4721]: I0202 14:18:38.043301 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m8lnf" Feb 02 14:18:38 crc kubenswrapper[4721]: I0202 14:18:38.050815 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m8lnf"] Feb 02 14:18:38 crc kubenswrapper[4721]: I0202 14:18:38.056216 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8t7m\" (UniqueName: \"kubernetes.io/projected/8bdddc4b-0476-4729-9d40-838e53a75e9f-kube-api-access-v8t7m\") pod \"community-operators-m8lnf\" (UID: \"8bdddc4b-0476-4729-9d40-838e53a75e9f\") " pod="openshift-marketplace/community-operators-m8lnf" Feb 02 14:18:38 crc kubenswrapper[4721]: I0202 14:18:38.056287 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bdddc4b-0476-4729-9d40-838e53a75e9f-catalog-content\") pod \"community-operators-m8lnf\" (UID: \"8bdddc4b-0476-4729-9d40-838e53a75e9f\") " pod="openshift-marketplace/community-operators-m8lnf" Feb 02 14:18:38 crc kubenswrapper[4721]: I0202 14:18:38.056328 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bdddc4b-0476-4729-9d40-838e53a75e9f-utilities\") pod \"community-operators-m8lnf\" (UID: \"8bdddc4b-0476-4729-9d40-838e53a75e9f\") " pod="openshift-marketplace/community-operators-m8lnf" Feb 02 14:18:38 crc kubenswrapper[4721]: I0202 14:18:38.170116 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8t7m\" (UniqueName: \"kubernetes.io/projected/8bdddc4b-0476-4729-9d40-838e53a75e9f-kube-api-access-v8t7m\") pod \"community-operators-m8lnf\" (UID: \"8bdddc4b-0476-4729-9d40-838e53a75e9f\") " pod="openshift-marketplace/community-operators-m8lnf" Feb 02 14:18:38 crc kubenswrapper[4721]: I0202 14:18:38.171271 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bdddc4b-0476-4729-9d40-838e53a75e9f-catalog-content\") pod \"community-operators-m8lnf\" (UID: \"8bdddc4b-0476-4729-9d40-838e53a75e9f\") " pod="openshift-marketplace/community-operators-m8lnf" Feb 02 14:18:38 crc kubenswrapper[4721]: I0202 14:18:38.170289 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bdddc4b-0476-4729-9d40-838e53a75e9f-catalog-content\") pod \"community-operators-m8lnf\" (UID: \"8bdddc4b-0476-4729-9d40-838e53a75e9f\") " pod="openshift-marketplace/community-operators-m8lnf" Feb 02 14:18:38 crc kubenswrapper[4721]: I0202 14:18:38.175248 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bdddc4b-0476-4729-9d40-838e53a75e9f-utilities\") pod \"community-operators-m8lnf\" (UID: \"8bdddc4b-0476-4729-9d40-838e53a75e9f\") " pod="openshift-marketplace/community-operators-m8lnf" Feb 02 14:18:38 crc kubenswrapper[4721]: I0202 14:18:38.175843 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bdddc4b-0476-4729-9d40-838e53a75e9f-utilities\") pod \"community-operators-m8lnf\" (UID: \"8bdddc4b-0476-4729-9d40-838e53a75e9f\") " pod="openshift-marketplace/community-operators-m8lnf" Feb 
02 14:18:38 crc kubenswrapper[4721]: I0202 14:18:38.191019 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8t7m\" (UniqueName: \"kubernetes.io/projected/8bdddc4b-0476-4729-9d40-838e53a75e9f-kube-api-access-v8t7m\") pod \"community-operators-m8lnf\" (UID: \"8bdddc4b-0476-4729-9d40-838e53a75e9f\") " pod="openshift-marketplace/community-operators-m8lnf" Feb 02 14:18:38 crc kubenswrapper[4721]: I0202 14:18:38.370938 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m8lnf" Feb 02 14:18:39 crc kubenswrapper[4721]: I0202 14:18:39.011983 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m8lnf"] Feb 02 14:18:39 crc kubenswrapper[4721]: I0202 14:18:39.600620 4721 generic.go:334] "Generic (PLEG): container finished" podID="8bdddc4b-0476-4729-9d40-838e53a75e9f" containerID="71d9066b6279724babc25c954a9a5748114dfc922de8badfc942f10b62352c3b" exitCode=0 Feb 02 14:18:39 crc kubenswrapper[4721]: I0202 14:18:39.600763 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8lnf" event={"ID":"8bdddc4b-0476-4729-9d40-838e53a75e9f","Type":"ContainerDied","Data":"71d9066b6279724babc25c954a9a5748114dfc922de8badfc942f10b62352c3b"} Feb 02 14:18:39 crc kubenswrapper[4721]: I0202 14:18:39.600913 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8lnf" event={"ID":"8bdddc4b-0476-4729-9d40-838e53a75e9f","Type":"ContainerStarted","Data":"3f21840e861b7563f2da3688dda64fc46f9019980ecde86841aff67af7b87fa8"} Feb 02 14:18:40 crc kubenswrapper[4721]: I0202 14:18:40.613723 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8lnf" event={"ID":"8bdddc4b-0476-4729-9d40-838e53a75e9f","Type":"ContainerStarted","Data":"a4cabbd58b03ffc78d865bfc47dd67e205be5cb898bd874018920d079acaae95"} Feb 02 14:18:40 crc kubenswrapper[4721]: I0202 14:18:40.617274 4721 generic.go:334] "Generic (PLEG): container finished" podID="263f27d2-5f52-41cf-9ff9-62bd2b195df4" containerID="32274dd2adabed2a33935a26e58c08e0355e6b861fe39345b34b7d33608acbb7" exitCode=0 Feb 02 14:18:40 crc kubenswrapper[4721]: I0202 14:18:40.617317 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmglb" event={"ID":"263f27d2-5f52-41cf-9ff9-62bd2b195df4","Type":"ContainerDied","Data":"32274dd2adabed2a33935a26e58c08e0355e6b861fe39345b34b7d33608acbb7"} Feb 02 14:18:41 crc kubenswrapper[4721]: I0202 14:18:41.628892 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmglb" event={"ID":"263f27d2-5f52-41cf-9ff9-62bd2b195df4","Type":"ContainerStarted","Data":"5bd7359f565ae41c756b8b3ce5bf851cdd66a9eb0ea910770311cd6e1d283e4b"} Feb 02 14:18:41 crc kubenswrapper[4721]: I0202 14:18:41.657444 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wmglb" podStartSLOduration=2.924546854 podStartE2EDuration="10.657424225s" podCreationTimestamp="2026-02-02 14:18:31 +0000 UTC" firstStartedPulling="2026-02-02 14:18:33.541965903 +0000 UTC m=+4653.844480302" lastFinishedPulling="2026-02-02 14:18:41.274843284 +0000 UTC m=+4661.577357673" observedRunningTime="2026-02-02 14:18:41.64838612 +0000 UTC m=+4661.950900519" watchObservedRunningTime="2026-02-02 14:18:41.657424225 +0000 UTC m=+4661.959938614"
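
For the pod_startup_latency_tracker entry above, the three durations are internally consistent: podStartE2EDuration minus the image-pull window (lastFinishedPulling - firstStartedPulling) equals podStartSLOduration, i.e. the SLO figure excludes pull time. That reading is inferred from these numbers; the authoritative definition is in the kubelet source. Checking with the monotonic m=+ offsets from the entry:

    # Monotonic offsets (seconds) copied from the log entry above.
    first_started_pulling = 4653.844480302
    last_finished_pulling = 4661.577357673
    e2e_duration = 10.657424225          # podStartE2EDuration

    pull_window = last_finished_pulling - first_started_pulling
    slo_duration = e2e_duration - pull_window
    print(f"pull={pull_window:.9f}s slo={slo_duration:.9f}s")
    # -> pull=7.732877371s slo=2.924546854s, matching podStartSLOduration=2.924546854
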
Feb 02 14:18:42 crc kubenswrapper[4721]: I0202 14:18:42.105703 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wmglb" Feb 02 14:18:42 crc kubenswrapper[4721]: I0202 14:18:42.105768 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wmglb" Feb 02 14:18:42 crc kubenswrapper[4721]: I0202 14:18:42.644513 4721 generic.go:334] "Generic (PLEG): container finished" podID="8bdddc4b-0476-4729-9d40-838e53a75e9f" containerID="a4cabbd58b03ffc78d865bfc47dd67e205be5cb898bd874018920d079acaae95" exitCode=0 Feb 02 14:18:42 crc kubenswrapper[4721]: I0202 14:18:42.644591 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8lnf" event={"ID":"8bdddc4b-0476-4729-9d40-838e53a75e9f","Type":"ContainerDied","Data":"a4cabbd58b03ffc78d865bfc47dd67e205be5cb898bd874018920d079acaae95"} Feb 02 14:18:43 crc kubenswrapper[4721]: I0202 14:18:43.241450 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wmglb" podUID="263f27d2-5f52-41cf-9ff9-62bd2b195df4" containerName="registry-server" probeResult="failure" output=< Feb 02 14:18:43 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Feb 02 14:18:43 crc kubenswrapper[4721]: > Feb 02 14:18:43 crc kubenswrapper[4721]: I0202 14:18:43.655461 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8lnf" event={"ID":"8bdddc4b-0476-4729-9d40-838e53a75e9f","Type":"ContainerStarted","Data":"18f97b0b44f86ef13cc59abe3aab035bf2f3370b9f54c3d9fa9d566914b83074"} Feb 02 14:18:43 crc kubenswrapper[4721]: I0202 14:18:43.682197 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m8lnf" podStartSLOduration=3.228348989 podStartE2EDuration="6.682182444s" podCreationTimestamp="2026-02-02 14:18:37 +0000 UTC" firstStartedPulling="2026-02-02 14:18:39.603501838 +0000 UTC m=+4659.906016227" lastFinishedPulling="2026-02-02 14:18:43.057335293 +0000 UTC m=+4663.359849682" observedRunningTime="2026-02-02 14:18:43.679895602 +0000 UTC m=+4663.982409991" watchObservedRunningTime="2026-02-02 14:18:43.682182444 +0000 UTC m=+4663.984696833" Feb 02 14:18:44 crc kubenswrapper[4721]: I0202 14:18:44.409903 4721 scope.go:117] "RemoveContainer" containerID="6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd" Feb 02 14:18:44 crc kubenswrapper[4721]: E0202 14:18:44.410265 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:18:48 crc kubenswrapper[4721]: I0202 14:18:48.371320 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m8lnf" Feb 02 14:18:48 crc kubenswrapper[4721]: I0202 14:18:48.372849 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m8lnf" Feb 02 14:18:48 crc kubenswrapper[4721]: I0202 14:18:48.920770 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m8lnf"
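
The startup probe for redhat-operators-wmglb keeps failing with timeout: failed to connect service ":50051" within 1s while the pod's extract-content step is still unpacking the catalog; once the registry-server is listening, the probe flips to started/ready (14:19:22 below). The output format is consistent with a grpc-health-probe-style check, which can be approximated in Python; a sketch assuming the standard gRPC health service on :50051:

    # pip install grpcio grpcio-health-checking
    # Rough equivalent of the probe in the entries above: a gRPC health
    # check against :50051 with a 1s deadline, mirroring "within 1s".
    import grpc
    from grpc_health.v1 import health_pb2, health_pb2_grpc

    channel = grpc.insecure_channel("localhost:50051")
    stub = health_pb2_grpc.HealthStub(channel)
    try:
        resp = stub.Check(health_pb2.HealthCheckRequest(service=""), timeout=1.0)
        print(health_pb2.HealthCheckResponse.ServingStatus.Name(resp.status))
    except grpc.RpcError as err:
        # DEADLINE_EXCEEDED / UNAVAILABLE while the catalog is still extracting
        print("probe failed:", err.code())
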
Feb 02 14:18:48 crc kubenswrapper[4721]: I0202 14:18:48.971659 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m8lnf" Feb 02 14:18:49 crc kubenswrapper[4721]: I0202 14:18:49.160955 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m8lnf"] Feb 02 14:18:50 crc kubenswrapper[4721]: I0202 14:18:50.715301 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m8lnf" podUID="8bdddc4b-0476-4729-9d40-838e53a75e9f" containerName="registry-server" containerID="cri-o://18f97b0b44f86ef13cc59abe3aab035bf2f3370b9f54c3d9fa9d566914b83074" gracePeriod=2 Feb 02 14:18:51 crc kubenswrapper[4721]: I0202 14:18:51.242292 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m8lnf" Feb 02 14:18:51 crc kubenswrapper[4721]: I0202 14:18:51.344358 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8t7m\" (UniqueName: \"kubernetes.io/projected/8bdddc4b-0476-4729-9d40-838e53a75e9f-kube-api-access-v8t7m\") pod \"8bdddc4b-0476-4729-9d40-838e53a75e9f\" (UID: \"8bdddc4b-0476-4729-9d40-838e53a75e9f\") " Feb 02 14:18:51 crc kubenswrapper[4721]: I0202 14:18:51.344530 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bdddc4b-0476-4729-9d40-838e53a75e9f-utilities\") pod \"8bdddc4b-0476-4729-9d40-838e53a75e9f\" (UID: \"8bdddc4b-0476-4729-9d40-838e53a75e9f\") " Feb 02 14:18:51 crc kubenswrapper[4721]: I0202 14:18:51.344679 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bdddc4b-0476-4729-9d40-838e53a75e9f-catalog-content\") pod \"8bdddc4b-0476-4729-9d40-838e53a75e9f\" (UID: \"8bdddc4b-0476-4729-9d40-838e53a75e9f\") " Feb 02 14:18:51 crc kubenswrapper[4721]: I0202 14:18:51.345681 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bdddc4b-0476-4729-9d40-838e53a75e9f-utilities" (OuterVolumeSpecName: "utilities") pod "8bdddc4b-0476-4729-9d40-838e53a75e9f" (UID: "8bdddc4b-0476-4729-9d40-838e53a75e9f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:18:51 crc kubenswrapper[4721]: I0202 14:18:51.350006 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bdddc4b-0476-4729-9d40-838e53a75e9f-kube-api-access-v8t7m" (OuterVolumeSpecName: "kube-api-access-v8t7m") pod "8bdddc4b-0476-4729-9d40-838e53a75e9f" (UID: "8bdddc4b-0476-4729-9d40-838e53a75e9f"). InnerVolumeSpecName "kube-api-access-v8t7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:18:51 crc kubenswrapper[4721]: I0202 14:18:51.398353 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bdddc4b-0476-4729-9d40-838e53a75e9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8bdddc4b-0476-4729-9d40-838e53a75e9f" (UID: "8bdddc4b-0476-4729-9d40-838e53a75e9f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:18:51 crc kubenswrapper[4721]: I0202 14:18:51.452150 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bdddc4b-0476-4729-9d40-838e53a75e9f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:18:51 crc kubenswrapper[4721]: I0202 14:18:51.452177 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8t7m\" (UniqueName: \"kubernetes.io/projected/8bdddc4b-0476-4729-9d40-838e53a75e9f-kube-api-access-v8t7m\") on node \"crc\" DevicePath \"\"" Feb 02 14:18:51 crc kubenswrapper[4721]: I0202 14:18:51.452190 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bdddc4b-0476-4729-9d40-838e53a75e9f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:18:51 crc kubenswrapper[4721]: I0202 14:18:51.730655 4721 generic.go:334] "Generic (PLEG): container finished" podID="8bdddc4b-0476-4729-9d40-838e53a75e9f" containerID="18f97b0b44f86ef13cc59abe3aab035bf2f3370b9f54c3d9fa9d566914b83074" exitCode=0 Feb 02 14:18:51 crc kubenswrapper[4721]: I0202 14:18:51.731465 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8lnf" event={"ID":"8bdddc4b-0476-4729-9d40-838e53a75e9f","Type":"ContainerDied","Data":"18f97b0b44f86ef13cc59abe3aab035bf2f3370b9f54c3d9fa9d566914b83074"} Feb 02 14:18:51 crc kubenswrapper[4721]: I0202 14:18:51.731511 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8lnf" event={"ID":"8bdddc4b-0476-4729-9d40-838e53a75e9f","Type":"ContainerDied","Data":"3f21840e861b7563f2da3688dda64fc46f9019980ecde86841aff67af7b87fa8"} Feb 02 14:18:51 crc kubenswrapper[4721]: I0202 14:18:51.731531 4721 scope.go:117] "RemoveContainer" containerID="18f97b0b44f86ef13cc59abe3aab035bf2f3370b9f54c3d9fa9d566914b83074"
Feb 02 14:18:51 crc kubenswrapper[4721]: I0202 14:18:51.731688 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m8lnf" Feb 02 14:18:51 crc kubenswrapper[4721]: I0202 14:18:51.762235 4721 scope.go:117] "RemoveContainer" containerID="a4cabbd58b03ffc78d865bfc47dd67e205be5cb898bd874018920d079acaae95" Feb 02 14:18:51 crc kubenswrapper[4721]: I0202 14:18:51.772875 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m8lnf"] Feb 02 14:18:51 crc kubenswrapper[4721]: I0202 14:18:51.784097 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m8lnf"] Feb 02 14:18:51 crc kubenswrapper[4721]: I0202 14:18:51.791468 4721 scope.go:117] "RemoveContainer" containerID="71d9066b6279724babc25c954a9a5748114dfc922de8badfc942f10b62352c3b" Feb 02 14:18:51 crc kubenswrapper[4721]: I0202 14:18:51.846563 4721 scope.go:117] "RemoveContainer" containerID="18f97b0b44f86ef13cc59abe3aab035bf2f3370b9f54c3d9fa9d566914b83074" Feb 02 14:18:51 crc kubenswrapper[4721]: E0202 14:18:51.847244 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18f97b0b44f86ef13cc59abe3aab035bf2f3370b9f54c3d9fa9d566914b83074\": container with ID starting with 18f97b0b44f86ef13cc59abe3aab035bf2f3370b9f54c3d9fa9d566914b83074 not found: ID does not exist" containerID="18f97b0b44f86ef13cc59abe3aab035bf2f3370b9f54c3d9fa9d566914b83074" Feb 02 14:18:51 crc kubenswrapper[4721]: I0202 14:18:51.847330 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18f97b0b44f86ef13cc59abe3aab035bf2f3370b9f54c3d9fa9d566914b83074"} err="failed to get container status \"18f97b0b44f86ef13cc59abe3aab035bf2f3370b9f54c3d9fa9d566914b83074\": rpc error: code = NotFound desc = could not find container \"18f97b0b44f86ef13cc59abe3aab035bf2f3370b9f54c3d9fa9d566914b83074\": container with ID starting with 18f97b0b44f86ef13cc59abe3aab035bf2f3370b9f54c3d9fa9d566914b83074 not found: ID does not exist" Feb 02 14:18:51 crc kubenswrapper[4721]: I0202 14:18:51.847382 4721 scope.go:117] "RemoveContainer" containerID="a4cabbd58b03ffc78d865bfc47dd67e205be5cb898bd874018920d079acaae95" Feb 02 14:18:51 crc kubenswrapper[4721]: E0202 14:18:51.847796 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4cabbd58b03ffc78d865bfc47dd67e205be5cb898bd874018920d079acaae95\": container with ID starting with a4cabbd58b03ffc78d865bfc47dd67e205be5cb898bd874018920d079acaae95 not found: ID does not exist" containerID="a4cabbd58b03ffc78d865bfc47dd67e205be5cb898bd874018920d079acaae95" Feb 02 14:18:51 crc kubenswrapper[4721]: I0202 14:18:51.847829 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4cabbd58b03ffc78d865bfc47dd67e205be5cb898bd874018920d079acaae95"} err="failed to get container status \"a4cabbd58b03ffc78d865bfc47dd67e205be5cb898bd874018920d079acaae95\": rpc error: code = NotFound desc = could not find container \"a4cabbd58b03ffc78d865bfc47dd67e205be5cb898bd874018920d079acaae95\": container with ID starting with a4cabbd58b03ffc78d865bfc47dd67e205be5cb898bd874018920d079acaae95 not found: ID does not exist" Feb 02 14:18:51 crc kubenswrapper[4721]: I0202 14:18:51.847857 4721 scope.go:117] "RemoveContainer" containerID="71d9066b6279724babc25c954a9a5748114dfc922de8badfc942f10b62352c3b" Feb 02 14:18:51 crc kubenswrapper[4721]: E0202 14:18:51.848540 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71d9066b6279724babc25c954a9a5748114dfc922de8badfc942f10b62352c3b\": container with ID starting with 71d9066b6279724babc25c954a9a5748114dfc922de8badfc942f10b62352c3b not found: ID does not exist" containerID="71d9066b6279724babc25c954a9a5748114dfc922de8badfc942f10b62352c3b" Feb 02 14:18:51 crc kubenswrapper[4721]: I0202 14:18:51.848569 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71d9066b6279724babc25c954a9a5748114dfc922de8badfc942f10b62352c3b"} err="failed to get container status \"71d9066b6279724babc25c954a9a5748114dfc922de8badfc942f10b62352c3b\": rpc error: code = NotFound desc = could not find container \"71d9066b6279724babc25c954a9a5748114dfc922de8badfc942f10b62352c3b\": container with ID starting with 71d9066b6279724babc25c954a9a5748114dfc922de8badfc942f10b62352c3b not found: ID does not exist" Feb 02 14:18:52 crc kubenswrapper[4721]: I0202 14:18:52.425436 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bdddc4b-0476-4729-9d40-838e53a75e9f" path="/var/lib/kubelet/pods/8bdddc4b-0476-4729-9d40-838e53a75e9f/volumes" Feb 02 14:18:53 crc kubenswrapper[4721]: I0202 14:18:53.153321 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wmglb" podUID="263f27d2-5f52-41cf-9ff9-62bd2b195df4" containerName="registry-server" probeResult="failure" output=< Feb 02 14:18:53 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Feb 02 14:18:53 crc kubenswrapper[4721]: > Feb 02 14:18:57 crc kubenswrapper[4721]: I0202 14:18:57.409678 4721 scope.go:117] "RemoveContainer" containerID="6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd" Feb 02 14:18:57 crc kubenswrapper[4721]: E0202 14:18:57.410304 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:19:03 crc kubenswrapper[4721]: I0202 14:19:03.172894 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wmglb" podUID="263f27d2-5f52-41cf-9ff9-62bd2b195df4" containerName="registry-server" probeResult="failure" output=< Feb 02 14:19:03 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Feb 02 14:19:03 crc kubenswrapper[4721]: > Feb 02 14:19:08 crc kubenswrapper[4721]: I0202 14:19:08.410339 4721 scope.go:117] "RemoveContainer" containerID="6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd" Feb 02 14:19:08 crc kubenswrapper[4721]: E0202 14:19:08.411476 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"
podUID="263f27d2-5f52-41cf-9ff9-62bd2b195df4" containerName="registry-server" probeResult="failure" output=< Feb 02 14:19:13 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Feb 02 14:19:13 crc kubenswrapper[4721]: > Feb 02 14:19:18 crc kubenswrapper[4721]: I0202 14:19:18.433489 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hrx6w"] Feb 02 14:19:18 crc kubenswrapper[4721]: E0202 14:19:18.434787 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bdddc4b-0476-4729-9d40-838e53a75e9f" containerName="registry-server" Feb 02 14:19:18 crc kubenswrapper[4721]: I0202 14:19:18.434806 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bdddc4b-0476-4729-9d40-838e53a75e9f" containerName="registry-server" Feb 02 14:19:18 crc kubenswrapper[4721]: E0202 14:19:18.434836 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bdddc4b-0476-4729-9d40-838e53a75e9f" containerName="extract-content" Feb 02 14:19:18 crc kubenswrapper[4721]: I0202 14:19:18.434845 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bdddc4b-0476-4729-9d40-838e53a75e9f" containerName="extract-content" Feb 02 14:19:18 crc kubenswrapper[4721]: E0202 14:19:18.434892 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bdddc4b-0476-4729-9d40-838e53a75e9f" containerName="extract-utilities" Feb 02 14:19:18 crc kubenswrapper[4721]: I0202 14:19:18.434902 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bdddc4b-0476-4729-9d40-838e53a75e9f" containerName="extract-utilities" Feb 02 14:19:18 crc kubenswrapper[4721]: I0202 14:19:18.435228 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bdddc4b-0476-4729-9d40-838e53a75e9f" containerName="registry-server" Feb 02 14:19:18 crc kubenswrapper[4721]: I0202 14:19:18.437450 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrx6w" Feb 02 14:19:18 crc kubenswrapper[4721]: I0202 14:19:18.447471 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrx6w"] Feb 02 14:19:18 crc kubenswrapper[4721]: I0202 14:19:18.556474 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blxbm\" (UniqueName: \"kubernetes.io/projected/3005aa34-5adf-43d0-90b8-82f91624d082-kube-api-access-blxbm\") pod \"redhat-marketplace-hrx6w\" (UID: \"3005aa34-5adf-43d0-90b8-82f91624d082\") " pod="openshift-marketplace/redhat-marketplace-hrx6w" Feb 02 14:19:18 crc kubenswrapper[4721]: I0202 14:19:18.556688 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3005aa34-5adf-43d0-90b8-82f91624d082-catalog-content\") pod \"redhat-marketplace-hrx6w\" (UID: \"3005aa34-5adf-43d0-90b8-82f91624d082\") " pod="openshift-marketplace/redhat-marketplace-hrx6w" Feb 02 14:19:18 crc kubenswrapper[4721]: I0202 14:19:18.556834 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3005aa34-5adf-43d0-90b8-82f91624d082-utilities\") pod \"redhat-marketplace-hrx6w\" (UID: \"3005aa34-5adf-43d0-90b8-82f91624d082\") " pod="openshift-marketplace/redhat-marketplace-hrx6w" Feb 02 14:19:18 crc kubenswrapper[4721]: I0202 14:19:18.659395 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3005aa34-5adf-43d0-90b8-82f91624d082-catalog-content\") pod \"redhat-marketplace-hrx6w\" (UID: \"3005aa34-5adf-43d0-90b8-82f91624d082\") " pod="openshift-marketplace/redhat-marketplace-hrx6w" Feb 02 14:19:18 crc kubenswrapper[4721]: I0202 14:19:18.659452 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3005aa34-5adf-43d0-90b8-82f91624d082-utilities\") pod \"redhat-marketplace-hrx6w\" (UID: \"3005aa34-5adf-43d0-90b8-82f91624d082\") " pod="openshift-marketplace/redhat-marketplace-hrx6w" Feb 02 14:19:18 crc kubenswrapper[4721]: I0202 14:19:18.659585 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blxbm\" (UniqueName: \"kubernetes.io/projected/3005aa34-5adf-43d0-90b8-82f91624d082-kube-api-access-blxbm\") pod \"redhat-marketplace-hrx6w\" (UID: \"3005aa34-5adf-43d0-90b8-82f91624d082\") " pod="openshift-marketplace/redhat-marketplace-hrx6w" Feb 02 14:19:18 crc kubenswrapper[4721]: I0202 14:19:18.660027 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3005aa34-5adf-43d0-90b8-82f91624d082-catalog-content\") pod \"redhat-marketplace-hrx6w\" (UID: \"3005aa34-5adf-43d0-90b8-82f91624d082\") " pod="openshift-marketplace/redhat-marketplace-hrx6w" Feb 02 14:19:18 crc kubenswrapper[4721]: I0202 14:19:18.660240 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3005aa34-5adf-43d0-90b8-82f91624d082-utilities\") pod \"redhat-marketplace-hrx6w\" (UID: \"3005aa34-5adf-43d0-90b8-82f91624d082\") " pod="openshift-marketplace/redhat-marketplace-hrx6w" Feb 02 14:19:18 crc kubenswrapper[4721]: I0202 14:19:18.680959 4721 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-blxbm\" (UniqueName: \"kubernetes.io/projected/3005aa34-5adf-43d0-90b8-82f91624d082-kube-api-access-blxbm\") pod \"redhat-marketplace-hrx6w\" (UID: \"3005aa34-5adf-43d0-90b8-82f91624d082\") " pod="openshift-marketplace/redhat-marketplace-hrx6w" Feb 02 14:19:18 crc kubenswrapper[4721]: I0202 14:19:18.761967 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrx6w" Feb 02 14:19:19 crc kubenswrapper[4721]: I0202 14:19:19.338502 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrx6w"] Feb 02 14:19:20 crc kubenswrapper[4721]: I0202 14:19:20.112412 4721 generic.go:334] "Generic (PLEG): container finished" podID="3005aa34-5adf-43d0-90b8-82f91624d082" containerID="ad82f58e137f8b92244c3701236f7ce0785115f9c6d9bd8ea8baa43e82f9a1be" exitCode=0 Feb 02 14:19:20 crc kubenswrapper[4721]: I0202 14:19:20.112448 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrx6w" event={"ID":"3005aa34-5adf-43d0-90b8-82f91624d082","Type":"ContainerDied","Data":"ad82f58e137f8b92244c3701236f7ce0785115f9c6d9bd8ea8baa43e82f9a1be"} Feb 02 14:19:20 crc kubenswrapper[4721]: I0202 14:19:20.112473 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrx6w" event={"ID":"3005aa34-5adf-43d0-90b8-82f91624d082","Type":"ContainerStarted","Data":"c90699de105f06a3330b3309c058512dd68a76e320f7810c34d0e844de217906"} Feb 02 14:19:21 crc kubenswrapper[4721]: I0202 14:19:21.130801 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrx6w" event={"ID":"3005aa34-5adf-43d0-90b8-82f91624d082","Type":"ContainerStarted","Data":"b7c5d3e0cc3de24c3af77da64cc04f012d62a7c8cb308e3619dc72c6677d70d2"} Feb 02 14:19:22 crc kubenswrapper[4721]: I0202 14:19:22.155032 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wmglb" Feb 02 14:19:22 crc kubenswrapper[4721]: I0202 14:19:22.211309 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wmglb" Feb 02 14:19:22 crc kubenswrapper[4721]: I0202 14:19:22.410044 4721 scope.go:117] "RemoveContainer" containerID="6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd" Feb 02 14:19:22 crc kubenswrapper[4721]: E0202 14:19:22.410669 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:19:23 crc kubenswrapper[4721]: I0202 14:19:23.156544 4721 generic.go:334] "Generic (PLEG): container finished" podID="3005aa34-5adf-43d0-90b8-82f91624d082" containerID="b7c5d3e0cc3de24c3af77da64cc04f012d62a7c8cb308e3619dc72c6677d70d2" exitCode=0 Feb 02 14:19:23 crc kubenswrapper[4721]: I0202 14:19:23.156626 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrx6w" event={"ID":"3005aa34-5adf-43d0-90b8-82f91624d082","Type":"ContainerDied","Data":"b7c5d3e0cc3de24c3af77da64cc04f012d62a7c8cb308e3619dc72c6677d70d2"} Feb 02 14:19:24 crc kubenswrapper[4721]: I0202 
14:19:24.175807 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrx6w" event={"ID":"3005aa34-5adf-43d0-90b8-82f91624d082","Type":"ContainerStarted","Data":"16cf2adabf3bb01467ce25195f0e1b9aafc8eb968866b3449ec06bbbe287ee87"} Feb 02 14:19:24 crc kubenswrapper[4721]: I0202 14:19:24.202095 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wmglb"] Feb 02 14:19:24 crc kubenswrapper[4721]: I0202 14:19:24.202409 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wmglb" podUID="263f27d2-5f52-41cf-9ff9-62bd2b195df4" containerName="registry-server" containerID="cri-o://5bd7359f565ae41c756b8b3ce5bf851cdd66a9eb0ea910770311cd6e1d283e4b" gracePeriod=2 Feb 02 14:19:24 crc kubenswrapper[4721]: I0202 14:19:24.222537 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hrx6w" podStartSLOduration=2.7550519700000002 podStartE2EDuration="6.222515063s" podCreationTimestamp="2026-02-02 14:19:18 +0000 UTC" firstStartedPulling="2026-02-02 14:19:20.115149673 +0000 UTC m=+4700.417664102" lastFinishedPulling="2026-02-02 14:19:23.582612806 +0000 UTC m=+4703.885127195" observedRunningTime="2026-02-02 14:19:24.221193158 +0000 UTC m=+4704.523707557" watchObservedRunningTime="2026-02-02 14:19:24.222515063 +0000 UTC m=+4704.525029472" Feb 02 14:19:24 crc kubenswrapper[4721]: I0202 14:19:24.804279 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wmglb" Feb 02 14:19:24 crc kubenswrapper[4721]: I0202 14:19:24.840949 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2qwd\" (UniqueName: \"kubernetes.io/projected/263f27d2-5f52-41cf-9ff9-62bd2b195df4-kube-api-access-z2qwd\") pod \"263f27d2-5f52-41cf-9ff9-62bd2b195df4\" (UID: \"263f27d2-5f52-41cf-9ff9-62bd2b195df4\") " Feb 02 14:19:24 crc kubenswrapper[4721]: I0202 14:19:24.841130 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/263f27d2-5f52-41cf-9ff9-62bd2b195df4-utilities\") pod \"263f27d2-5f52-41cf-9ff9-62bd2b195df4\" (UID: \"263f27d2-5f52-41cf-9ff9-62bd2b195df4\") " Feb 02 14:19:24 crc kubenswrapper[4721]: I0202 14:19:24.841178 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/263f27d2-5f52-41cf-9ff9-62bd2b195df4-catalog-content\") pod \"263f27d2-5f52-41cf-9ff9-62bd2b195df4\" (UID: \"263f27d2-5f52-41cf-9ff9-62bd2b195df4\") " Feb 02 14:19:24 crc kubenswrapper[4721]: I0202 14:19:24.843453 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/263f27d2-5f52-41cf-9ff9-62bd2b195df4-utilities" (OuterVolumeSpecName: "utilities") pod "263f27d2-5f52-41cf-9ff9-62bd2b195df4" (UID: "263f27d2-5f52-41cf-9ff9-62bd2b195df4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:19:24 crc kubenswrapper[4721]: I0202 14:19:24.851036 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/263f27d2-5f52-41cf-9ff9-62bd2b195df4-kube-api-access-z2qwd" (OuterVolumeSpecName: "kube-api-access-z2qwd") pod "263f27d2-5f52-41cf-9ff9-62bd2b195df4" (UID: "263f27d2-5f52-41cf-9ff9-62bd2b195df4"). InnerVolumeSpecName "kube-api-access-z2qwd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:19:24 crc kubenswrapper[4721]: I0202 14:19:24.943723 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2qwd\" (UniqueName: \"kubernetes.io/projected/263f27d2-5f52-41cf-9ff9-62bd2b195df4-kube-api-access-z2qwd\") on node \"crc\" DevicePath \"\"" Feb 02 14:19:24 crc kubenswrapper[4721]: I0202 14:19:24.943758 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/263f27d2-5f52-41cf-9ff9-62bd2b195df4-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:19:24 crc kubenswrapper[4721]: I0202 14:19:24.968416 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/263f27d2-5f52-41cf-9ff9-62bd2b195df4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "263f27d2-5f52-41cf-9ff9-62bd2b195df4" (UID: "263f27d2-5f52-41cf-9ff9-62bd2b195df4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:19:25 crc kubenswrapper[4721]: I0202 14:19:25.046887 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/263f27d2-5f52-41cf-9ff9-62bd2b195df4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:19:25 crc kubenswrapper[4721]: I0202 14:19:25.189937 4721 generic.go:334] "Generic (PLEG): container finished" podID="263f27d2-5f52-41cf-9ff9-62bd2b195df4" containerID="5bd7359f565ae41c756b8b3ce5bf851cdd66a9eb0ea910770311cd6e1d283e4b" exitCode=0 Feb 02 14:19:25 crc kubenswrapper[4721]: I0202 14:19:25.189993 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wmglb" Feb 02 14:19:25 crc kubenswrapper[4721]: I0202 14:19:25.189983 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmglb" event={"ID":"263f27d2-5f52-41cf-9ff9-62bd2b195df4","Type":"ContainerDied","Data":"5bd7359f565ae41c756b8b3ce5bf851cdd66a9eb0ea910770311cd6e1d283e4b"} Feb 02 14:19:25 crc kubenswrapper[4721]: I0202 14:19:25.190381 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmglb" event={"ID":"263f27d2-5f52-41cf-9ff9-62bd2b195df4","Type":"ContainerDied","Data":"c596e828defbc279db6bc8099dcdc050b683b9191c7efd81b06007f36b53c227"} Feb 02 14:19:25 crc kubenswrapper[4721]: I0202 14:19:25.190407 4721 scope.go:117] "RemoveContainer" containerID="5bd7359f565ae41c756b8b3ce5bf851cdd66a9eb0ea910770311cd6e1d283e4b" Feb 02 14:19:25 crc kubenswrapper[4721]: I0202 14:19:25.228773 4721 scope.go:117] "RemoveContainer" containerID="32274dd2adabed2a33935a26e58c08e0355e6b861fe39345b34b7d33608acbb7" Feb 02 14:19:25 crc kubenswrapper[4721]: I0202 14:19:25.235613 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wmglb"] Feb 02 14:19:25 crc kubenswrapper[4721]: I0202 14:19:25.255524 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wmglb"] Feb 02 14:19:25 crc kubenswrapper[4721]: I0202 14:19:25.265507 4721 scope.go:117] "RemoveContainer" containerID="34d8e37ac136c048e14db0e26d0ed848b873e987fc398b35a174771fee1eaac6" Feb 02 14:19:25 crc kubenswrapper[4721]: E0202 14:19:25.321542 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod263f27d2_5f52_41cf_9ff9_62bd2b195df4.slice/crio-c596e828defbc279db6bc8099dcdc050b683b9191c7efd81b06007f36b53c227\": RecentStats: unable to find data in memory cache]" Feb 02 14:19:25 crc kubenswrapper[4721]: I0202 14:19:25.328722 4721 scope.go:117] "RemoveContainer" containerID="5bd7359f565ae41c756b8b3ce5bf851cdd66a9eb0ea910770311cd6e1d283e4b" Feb 02 14:19:25 crc kubenswrapper[4721]: E0202 14:19:25.329201 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bd7359f565ae41c756b8b3ce5bf851cdd66a9eb0ea910770311cd6e1d283e4b\": container with ID starting with 5bd7359f565ae41c756b8b3ce5bf851cdd66a9eb0ea910770311cd6e1d283e4b not found: ID does not exist" containerID="5bd7359f565ae41c756b8b3ce5bf851cdd66a9eb0ea910770311cd6e1d283e4b" Feb 02 14:19:25 crc kubenswrapper[4721]: I0202 14:19:25.329256 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bd7359f565ae41c756b8b3ce5bf851cdd66a9eb0ea910770311cd6e1d283e4b"} err="failed to get container status \"5bd7359f565ae41c756b8b3ce5bf851cdd66a9eb0ea910770311cd6e1d283e4b\": rpc error: code = NotFound desc = could not find container \"5bd7359f565ae41c756b8b3ce5bf851cdd66a9eb0ea910770311cd6e1d283e4b\": container with ID starting with 5bd7359f565ae41c756b8b3ce5bf851cdd66a9eb0ea910770311cd6e1d283e4b not found: ID does not exist" Feb 02 14:19:25 crc kubenswrapper[4721]: I0202 14:19:25.329278 4721 scope.go:117] "RemoveContainer" containerID="32274dd2adabed2a33935a26e58c08e0355e6b861fe39345b34b7d33608acbb7" Feb 02 14:19:25 crc kubenswrapper[4721]: E0202 14:19:25.329673 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32274dd2adabed2a33935a26e58c08e0355e6b861fe39345b34b7d33608acbb7\": container with ID starting with 32274dd2adabed2a33935a26e58c08e0355e6b861fe39345b34b7d33608acbb7 not found: ID does not exist" containerID="32274dd2adabed2a33935a26e58c08e0355e6b861fe39345b34b7d33608acbb7" Feb 02 14:19:25 crc kubenswrapper[4721]: I0202 14:19:25.329699 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32274dd2adabed2a33935a26e58c08e0355e6b861fe39345b34b7d33608acbb7"} err="failed to get container status \"32274dd2adabed2a33935a26e58c08e0355e6b861fe39345b34b7d33608acbb7\": rpc error: code = NotFound desc = could not find container \"32274dd2adabed2a33935a26e58c08e0355e6b861fe39345b34b7d33608acbb7\": container with ID starting with 32274dd2adabed2a33935a26e58c08e0355e6b861fe39345b34b7d33608acbb7 not found: ID does not exist" Feb 02 14:19:25 crc kubenswrapper[4721]: I0202 14:19:25.329717 4721 scope.go:117] "RemoveContainer" containerID="34d8e37ac136c048e14db0e26d0ed848b873e987fc398b35a174771fee1eaac6" Feb 02 14:19:25 crc kubenswrapper[4721]: E0202 14:19:25.329957 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34d8e37ac136c048e14db0e26d0ed848b873e987fc398b35a174771fee1eaac6\": container with ID starting with 34d8e37ac136c048e14db0e26d0ed848b873e987fc398b35a174771fee1eaac6 not found: ID does not exist" containerID="34d8e37ac136c048e14db0e26d0ed848b873e987fc398b35a174771fee1eaac6" Feb 02 14:19:25 crc kubenswrapper[4721]: I0202 14:19:25.329971 4721 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"34d8e37ac136c048e14db0e26d0ed848b873e987fc398b35a174771fee1eaac6"} err="failed to get container status \"34d8e37ac136c048e14db0e26d0ed848b873e987fc398b35a174771fee1eaac6\": rpc error: code = NotFound desc = could not find container \"34d8e37ac136c048e14db0e26d0ed848b873e987fc398b35a174771fee1eaac6\": container with ID starting with 34d8e37ac136c048e14db0e26d0ed848b873e987fc398b35a174771fee1eaac6 not found: ID does not exist" Feb 02 14:19:26 crc kubenswrapper[4721]: I0202 14:19:26.433329 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="263f27d2-5f52-41cf-9ff9-62bd2b195df4" path="/var/lib/kubelet/pods/263f27d2-5f52-41cf-9ff9-62bd2b195df4/volumes" Feb 02 14:19:28 crc kubenswrapper[4721]: I0202 14:19:28.762755 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hrx6w" Feb 02 14:19:28 crc kubenswrapper[4721]: I0202 14:19:28.764184 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hrx6w" Feb 02 14:19:28 crc kubenswrapper[4721]: I0202 14:19:28.809778 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hrx6w" Feb 02 14:19:29 crc kubenswrapper[4721]: I0202 14:19:29.278802 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hrx6w" Feb 02 14:19:29 crc kubenswrapper[4721]: I0202 14:19:29.991684 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrx6w"] Feb 02 14:19:31 crc kubenswrapper[4721]: I0202 14:19:31.253334 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hrx6w" podUID="3005aa34-5adf-43d0-90b8-82f91624d082" containerName="registry-server" containerID="cri-o://16cf2adabf3bb01467ce25195f0e1b9aafc8eb968866b3449ec06bbbe287ee87" gracePeriod=2 Feb 02 14:19:31 crc kubenswrapper[4721]: I0202 14:19:31.880996 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrx6w" Feb 02 14:19:31 crc kubenswrapper[4721]: I0202 14:19:31.916113 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3005aa34-5adf-43d0-90b8-82f91624d082-utilities\") pod \"3005aa34-5adf-43d0-90b8-82f91624d082\" (UID: \"3005aa34-5adf-43d0-90b8-82f91624d082\") " Feb 02 14:19:31 crc kubenswrapper[4721]: I0202 14:19:31.917013 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3005aa34-5adf-43d0-90b8-82f91624d082-utilities" (OuterVolumeSpecName: "utilities") pod "3005aa34-5adf-43d0-90b8-82f91624d082" (UID: "3005aa34-5adf-43d0-90b8-82f91624d082"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:19:31 crc kubenswrapper[4721]: I0202 14:19:31.917178 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3005aa34-5adf-43d0-90b8-82f91624d082-catalog-content\") pod \"3005aa34-5adf-43d0-90b8-82f91624d082\" (UID: \"3005aa34-5adf-43d0-90b8-82f91624d082\") " Feb 02 14:19:31 crc kubenswrapper[4721]: I0202 14:19:31.917283 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blxbm\" (UniqueName: \"kubernetes.io/projected/3005aa34-5adf-43d0-90b8-82f91624d082-kube-api-access-blxbm\") pod \"3005aa34-5adf-43d0-90b8-82f91624d082\" (UID: \"3005aa34-5adf-43d0-90b8-82f91624d082\") " Feb 02 14:19:31 crc kubenswrapper[4721]: I0202 14:19:31.918779 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3005aa34-5adf-43d0-90b8-82f91624d082-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:19:31 crc kubenswrapper[4721]: I0202 14:19:31.924418 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3005aa34-5adf-43d0-90b8-82f91624d082-kube-api-access-blxbm" (OuterVolumeSpecName: "kube-api-access-blxbm") pod "3005aa34-5adf-43d0-90b8-82f91624d082" (UID: "3005aa34-5adf-43d0-90b8-82f91624d082"). InnerVolumeSpecName "kube-api-access-blxbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:19:31 crc kubenswrapper[4721]: I0202 14:19:31.940693 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3005aa34-5adf-43d0-90b8-82f91624d082-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3005aa34-5adf-43d0-90b8-82f91624d082" (UID: "3005aa34-5adf-43d0-90b8-82f91624d082"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:19:32 crc kubenswrapper[4721]: I0202 14:19:32.021199 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3005aa34-5adf-43d0-90b8-82f91624d082-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:19:32 crc kubenswrapper[4721]: I0202 14:19:32.021231 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blxbm\" (UniqueName: \"kubernetes.io/projected/3005aa34-5adf-43d0-90b8-82f91624d082-kube-api-access-blxbm\") on node \"crc\" DevicePath \"\"" Feb 02 14:19:32 crc kubenswrapper[4721]: I0202 14:19:32.262431 4721 generic.go:334] "Generic (PLEG): container finished" podID="3005aa34-5adf-43d0-90b8-82f91624d082" containerID="16cf2adabf3bb01467ce25195f0e1b9aafc8eb968866b3449ec06bbbe287ee87" exitCode=0 Feb 02 14:19:32 crc kubenswrapper[4721]: I0202 14:19:32.262468 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrx6w" event={"ID":"3005aa34-5adf-43d0-90b8-82f91624d082","Type":"ContainerDied","Data":"16cf2adabf3bb01467ce25195f0e1b9aafc8eb968866b3449ec06bbbe287ee87"} Feb 02 14:19:32 crc kubenswrapper[4721]: I0202 14:19:32.262494 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrx6w" event={"ID":"3005aa34-5adf-43d0-90b8-82f91624d082","Type":"ContainerDied","Data":"c90699de105f06a3330b3309c058512dd68a76e320f7810c34d0e844de217906"} Feb 02 14:19:32 crc kubenswrapper[4721]: I0202 14:19:32.262509 4721 scope.go:117] "RemoveContainer" containerID="16cf2adabf3bb01467ce25195f0e1b9aafc8eb968866b3449ec06bbbe287ee87" Feb 02 14:19:32 crc kubenswrapper[4721]: I0202 14:19:32.262824 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrx6w" Feb 02 14:19:32 crc kubenswrapper[4721]: I0202 14:19:32.293720 4721 scope.go:117] "RemoveContainer" containerID="b7c5d3e0cc3de24c3af77da64cc04f012d62a7c8cb308e3619dc72c6677d70d2" Feb 02 14:19:32 crc kubenswrapper[4721]: I0202 14:19:32.299766 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrx6w"] Feb 02 14:19:32 crc kubenswrapper[4721]: I0202 14:19:32.308556 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrx6w"] Feb 02 14:19:32 crc kubenswrapper[4721]: I0202 14:19:32.328323 4721 scope.go:117] "RemoveContainer" containerID="ad82f58e137f8b92244c3701236f7ce0785115f9c6d9bd8ea8baa43e82f9a1be" Feb 02 14:19:32 crc kubenswrapper[4721]: I0202 14:19:32.377653 4721 scope.go:117] "RemoveContainer" containerID="16cf2adabf3bb01467ce25195f0e1b9aafc8eb968866b3449ec06bbbe287ee87" Feb 02 14:19:32 crc kubenswrapper[4721]: E0202 14:19:32.378203 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16cf2adabf3bb01467ce25195f0e1b9aafc8eb968866b3449ec06bbbe287ee87\": container with ID starting with 16cf2adabf3bb01467ce25195f0e1b9aafc8eb968866b3449ec06bbbe287ee87 not found: ID does not exist" containerID="16cf2adabf3bb01467ce25195f0e1b9aafc8eb968866b3449ec06bbbe287ee87" Feb 02 14:19:32 crc kubenswrapper[4721]: I0202 14:19:32.378233 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16cf2adabf3bb01467ce25195f0e1b9aafc8eb968866b3449ec06bbbe287ee87"} err="failed to get container status \"16cf2adabf3bb01467ce25195f0e1b9aafc8eb968866b3449ec06bbbe287ee87\": rpc error: code = NotFound desc = could not find container \"16cf2adabf3bb01467ce25195f0e1b9aafc8eb968866b3449ec06bbbe287ee87\": container with ID starting with 16cf2adabf3bb01467ce25195f0e1b9aafc8eb968866b3449ec06bbbe287ee87 not found: ID does not exist" Feb 02 14:19:32 crc kubenswrapper[4721]: I0202 14:19:32.378253 4721 scope.go:117] "RemoveContainer" containerID="b7c5d3e0cc3de24c3af77da64cc04f012d62a7c8cb308e3619dc72c6677d70d2" Feb 02 14:19:32 crc kubenswrapper[4721]: E0202 14:19:32.378564 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7c5d3e0cc3de24c3af77da64cc04f012d62a7c8cb308e3619dc72c6677d70d2\": container with ID starting with b7c5d3e0cc3de24c3af77da64cc04f012d62a7c8cb308e3619dc72c6677d70d2 not found: ID does not exist" containerID="b7c5d3e0cc3de24c3af77da64cc04f012d62a7c8cb308e3619dc72c6677d70d2" Feb 02 14:19:32 crc kubenswrapper[4721]: I0202 14:19:32.378625 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7c5d3e0cc3de24c3af77da64cc04f012d62a7c8cb308e3619dc72c6677d70d2"} err="failed to get container status \"b7c5d3e0cc3de24c3af77da64cc04f012d62a7c8cb308e3619dc72c6677d70d2\": rpc error: code = NotFound desc = could not find container \"b7c5d3e0cc3de24c3af77da64cc04f012d62a7c8cb308e3619dc72c6677d70d2\": container with ID starting with b7c5d3e0cc3de24c3af77da64cc04f012d62a7c8cb308e3619dc72c6677d70d2 not found: ID does not exist" Feb 02 14:19:32 crc kubenswrapper[4721]: I0202 14:19:32.378664 4721 scope.go:117] "RemoveContainer" containerID="ad82f58e137f8b92244c3701236f7ce0785115f9c6d9bd8ea8baa43e82f9a1be" Feb 02 14:19:32 crc kubenswrapper[4721]: E0202 14:19:32.378988 4721 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ad82f58e137f8b92244c3701236f7ce0785115f9c6d9bd8ea8baa43e82f9a1be\": container with ID starting with ad82f58e137f8b92244c3701236f7ce0785115f9c6d9bd8ea8baa43e82f9a1be not found: ID does not exist" containerID="ad82f58e137f8b92244c3701236f7ce0785115f9c6d9bd8ea8baa43e82f9a1be" Feb 02 14:19:32 crc kubenswrapper[4721]: I0202 14:19:32.379014 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad82f58e137f8b92244c3701236f7ce0785115f9c6d9bd8ea8baa43e82f9a1be"} err="failed to get container status \"ad82f58e137f8b92244c3701236f7ce0785115f9c6d9bd8ea8baa43e82f9a1be\": rpc error: code = NotFound desc = could not find container \"ad82f58e137f8b92244c3701236f7ce0785115f9c6d9bd8ea8baa43e82f9a1be\": container with ID starting with ad82f58e137f8b92244c3701236f7ce0785115f9c6d9bd8ea8baa43e82f9a1be not found: ID does not exist" Feb 02 14:19:32 crc kubenswrapper[4721]: I0202 14:19:32.423276 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3005aa34-5adf-43d0-90b8-82f91624d082" path="/var/lib/kubelet/pods/3005aa34-5adf-43d0-90b8-82f91624d082/volumes" Feb 02 14:19:34 crc kubenswrapper[4721]: I0202 14:19:34.410531 4721 scope.go:117] "RemoveContainer" containerID="6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd" Feb 02 14:19:34 crc kubenswrapper[4721]: E0202 14:19:34.410828 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:19:49 crc kubenswrapper[4721]: I0202 14:19:49.410564 4721 scope.go:117] "RemoveContainer" containerID="6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd" Feb 02 14:19:49 crc kubenswrapper[4721]: E0202 14:19:49.411498 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:20:00 crc kubenswrapper[4721]: I0202 14:20:00.422050 4721 scope.go:117] "RemoveContainer" containerID="6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd" Feb 02 14:20:00 crc kubenswrapper[4721]: E0202 14:20:00.423084 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:20:11 crc kubenswrapper[4721]: I0202 14:20:11.410861 4721 scope.go:117] "RemoveContainer" containerID="6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd" Feb 02 14:20:11 crc kubenswrapper[4721]: E0202 14:20:11.411848 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:20:23 crc kubenswrapper[4721]: I0202 14:20:23.410212 4721 scope.go:117] "RemoveContainer" containerID="6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd" Feb 02 14:20:23 crc kubenswrapper[4721]: E0202 14:20:23.411101 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:20:35 crc kubenswrapper[4721]: I0202 14:20:35.410033 4721 scope.go:117] "RemoveContainer" containerID="6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd" Feb 02 14:20:35 crc kubenswrapper[4721]: E0202 14:20:35.411099 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:20:50 crc kubenswrapper[4721]: I0202 14:20:50.429959 4721 scope.go:117] "RemoveContainer" containerID="6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd" Feb 02 14:20:50 crc kubenswrapper[4721]: E0202 14:20:50.431801 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:21:02 crc kubenswrapper[4721]: I0202 14:21:02.410226 4721 scope.go:117] "RemoveContainer" containerID="6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd" Feb 02 14:21:02 crc kubenswrapper[4721]: E0202 14:21:02.411719 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:21:14 crc kubenswrapper[4721]: I0202 14:21:14.410844 4721 scope.go:117] "RemoveContainer" containerID="6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd" Feb 02 14:21:14 crc kubenswrapper[4721]: E0202 14:21:14.412018 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:21:29 crc kubenswrapper[4721]: I0202 14:21:29.410710 4721 scope.go:117] "RemoveContainer" containerID="6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd" Feb 02 14:21:29 crc kubenswrapper[4721]: E0202 14:21:29.413591 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:21:41 crc kubenswrapper[4721]: I0202 14:21:41.411336 4721 scope.go:117] "RemoveContainer" containerID="6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd" Feb 02 14:21:41 crc kubenswrapper[4721]: E0202 14:21:41.412034 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:21:53 crc kubenswrapper[4721]: I0202 14:21:53.410931 4721 scope.go:117] "RemoveContainer" containerID="6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd" Feb 02 14:21:53 crc kubenswrapper[4721]: E0202 14:21:53.412201 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:22:08 crc kubenswrapper[4721]: I0202 14:22:08.410168 4721 scope.go:117] "RemoveContainer" containerID="6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd" Feb 02 14:22:08 crc kubenswrapper[4721]: E0202 14:22:08.411158 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:22:23 crc kubenswrapper[4721]: I0202 14:22:23.410572 4721 scope.go:117] "RemoveContainer" containerID="6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd" Feb 02 14:22:24 crc kubenswrapper[4721]: I0202 14:22:24.111707 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerStarted","Data":"80a4d97025e59da4be229a82e47246c7035ccedac52df0fa5679fdc8c1c7b8ec"}
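
The machine-config-daemon entries from 14:18:44 onward show crash-loop back-off at its 5m0s cap: every sync attempt is skipped with the same error, until the back-off window expires and the container restarts at 14:22:23-14:22:24 (ContainerStarted 80a4d970...). A sketch of the schedule, assuming the kubelet's usual defaults of a 10s initial delay doubling to a 5-minute cap (the exact constants live in the kubelet source):

    # Doubling crash-loop back-off: 10s initial, capped at 5 minutes
    # (assumed defaults; see above). From the 6th restart onward the wait
    # is the full 300s -- the "back-off 5m0s" in the entries above.
    def backoff_schedule(initial=10.0, cap=300.0, restarts=8):
        delay = initial
        for n in range(restarts):
            yield n, min(delay, cap)
            delay *= 2

    for n, wait in backoff_schedule():
        print(f"restart {n}: wait {wait:.0f}s")
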
Feb 02 14:22:28 crc kubenswrapper[4721]: I0202 14:22:28.121877 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-d2dxv/must-gather-kr4jl"]
Feb 02 14:22:28 crc kubenswrapper[4721]: E0202 14:22:28.123193 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3005aa34-5adf-43d0-90b8-82f91624d082" containerName="extract-content"
Feb 02 14:22:28 crc kubenswrapper[4721]: I0202 14:22:28.123213 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="3005aa34-5adf-43d0-90b8-82f91624d082" containerName="extract-content"
Feb 02 14:22:28 crc kubenswrapper[4721]: E0202 14:22:28.123241 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="263f27d2-5f52-41cf-9ff9-62bd2b195df4" containerName="extract-content"
Feb 02 14:22:28 crc kubenswrapper[4721]: I0202 14:22:28.123250 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="263f27d2-5f52-41cf-9ff9-62bd2b195df4" containerName="extract-content"
Feb 02 14:22:28 crc kubenswrapper[4721]: E0202 14:22:28.123263 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="263f27d2-5f52-41cf-9ff9-62bd2b195df4" containerName="registry-server"
Feb 02 14:22:28 crc kubenswrapper[4721]: I0202 14:22:28.123273 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="263f27d2-5f52-41cf-9ff9-62bd2b195df4" containerName="registry-server"
Feb 02 14:22:28 crc kubenswrapper[4721]: E0202 14:22:28.123290 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3005aa34-5adf-43d0-90b8-82f91624d082" containerName="registry-server"
Feb 02 14:22:28 crc kubenswrapper[4721]: I0202 14:22:28.123298 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="3005aa34-5adf-43d0-90b8-82f91624d082" containerName="registry-server"
Feb 02 14:22:28 crc kubenswrapper[4721]: E0202 14:22:28.123312 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3005aa34-5adf-43d0-90b8-82f91624d082" containerName="extract-utilities"
Feb 02 14:22:28 crc kubenswrapper[4721]: I0202 14:22:28.123320 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="3005aa34-5adf-43d0-90b8-82f91624d082" containerName="extract-utilities"
Feb 02 14:22:28 crc kubenswrapper[4721]: E0202 14:22:28.123347 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="263f27d2-5f52-41cf-9ff9-62bd2b195df4" containerName="extract-utilities"
Feb 02 14:22:28 crc kubenswrapper[4721]: I0202 14:22:28.123355 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="263f27d2-5f52-41cf-9ff9-62bd2b195df4" containerName="extract-utilities"
Feb 02 14:22:28 crc kubenswrapper[4721]: I0202 14:22:28.123664 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="3005aa34-5adf-43d0-90b8-82f91624d082" containerName="registry-server"
Feb 02 14:22:28 crc kubenswrapper[4721]: I0202 14:22:28.123679 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="263f27d2-5f52-41cf-9ff9-62bd2b195df4" containerName="registry-server"
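Before admitting the new must-gather pod, the CPU and memory managers first purge bookkeeping left behind by containers of pods that no longer exist (podUIDs 3005aa34... and 263f27d2... above), hence the paired "RemoveStaleState" / "Deleted CPUSet assignment" entries. The kubelet persists those assignments in a JSON checkpoint on disk; a sketch that dumps it, assuming the conventional path /var/lib/kubelet/cpu_manager_state (both the path and the schema are assumptions here, so the decode is deliberately loose):

```go
package main

import (
    "encoding/json"
    "fmt"
    "os"
)

func main() {
    // Assumed checkpoint location; the kubelet writes its CPU manager state
    // here on typical installs.
    raw, err := os.ReadFile("/var/lib/kubelet/cpu_manager_state")
    if err != nil {
        panic(err)
    }
    // Decode into a generic map rather than hard-coding the checkpoint schema.
    var state map[string]interface{}
    if err := json.Unmarshal(raw, &state); err != nil {
        panic(err)
    }
    for k, v := range state {
        fmt.Printf("%s: %v\n", k, v)
    }
}
```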
Feb 02 14:22:28 crc kubenswrapper[4721]: I0202 14:22:28.125824 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d2dxv/must-gather-kr4jl"
Feb 02 14:22:28 crc kubenswrapper[4721]: I0202 14:22:28.129210 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-d2dxv"/"openshift-service-ca.crt"
Feb 02 14:22:28 crc kubenswrapper[4721]: I0202 14:22:28.129556 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-d2dxv"/"default-dockercfg-86gxd"
Feb 02 14:22:28 crc kubenswrapper[4721]: I0202 14:22:28.134609 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-d2dxv"/"kube-root-ca.crt"
Feb 02 14:22:28 crc kubenswrapper[4721]: I0202 14:22:28.139312 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-d2dxv/must-gather-kr4jl"]
Feb 02 14:22:28 crc kubenswrapper[4721]: I0202 14:22:28.192457 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v576\" (UniqueName: \"kubernetes.io/projected/56b96222-739f-41d2-996e-14e2ee91a139-kube-api-access-9v576\") pod \"must-gather-kr4jl\" (UID: \"56b96222-739f-41d2-996e-14e2ee91a139\") " pod="openshift-must-gather-d2dxv/must-gather-kr4jl"
Feb 02 14:22:28 crc kubenswrapper[4721]: I0202 14:22:28.192909 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56b96222-739f-41d2-996e-14e2ee91a139-must-gather-output\") pod \"must-gather-kr4jl\" (UID: \"56b96222-739f-41d2-996e-14e2ee91a139\") " pod="openshift-must-gather-d2dxv/must-gather-kr4jl"
Feb 02 14:22:28 crc kubenswrapper[4721]: I0202 14:22:28.295417 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56b96222-739f-41d2-996e-14e2ee91a139-must-gather-output\") pod \"must-gather-kr4jl\" (UID: \"56b96222-739f-41d2-996e-14e2ee91a139\") " pod="openshift-must-gather-d2dxv/must-gather-kr4jl"
Feb 02 14:22:28 crc kubenswrapper[4721]: I0202 14:22:28.295526 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v576\" (UniqueName: \"kubernetes.io/projected/56b96222-739f-41d2-996e-14e2ee91a139-kube-api-access-9v576\") pod \"must-gather-kr4jl\" (UID: \"56b96222-739f-41d2-996e-14e2ee91a139\") " pod="openshift-must-gather-d2dxv/must-gather-kr4jl"
Feb 02 14:22:28 crc kubenswrapper[4721]: I0202 14:22:28.296055 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56b96222-739f-41d2-996e-14e2ee91a139-must-gather-output\") pod \"must-gather-kr4jl\" (UID: \"56b96222-739f-41d2-996e-14e2ee91a139\") " pod="openshift-must-gather-d2dxv/must-gather-kr4jl"
Feb 02 14:22:28 crc kubenswrapper[4721]: I0202 14:22:28.325756 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v576\" (UniqueName: \"kubernetes.io/projected/56b96222-739f-41d2-996e-14e2ee91a139-kube-api-access-9v576\") pod \"must-gather-kr4jl\" (UID: \"56b96222-739f-41d2-996e-14e2ee91a139\") " pod="openshift-must-gather-d2dxv/must-gather-kr4jl"
Feb 02 14:22:28 crc kubenswrapper[4721]: I0202 14:22:28.450985 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d2dxv/must-gather-kr4jl"
Feb 02 14:22:28 crc kubenswrapper[4721]: I0202 14:22:28.959389 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-d2dxv/must-gather-kr4jl"]
Feb 02 14:22:29 crc kubenswrapper[4721]: I0202 14:22:29.176277 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d2dxv/must-gather-kr4jl" event={"ID":"56b96222-739f-41d2-996e-14e2ee91a139","Type":"ContainerStarted","Data":"58e1e916eac833a2da01583a92d764fb1598205670f01f6a944e39aa58fc1c62"}
Feb 02 14:22:34 crc kubenswrapper[4721]: I0202 14:22:34.228403 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d2dxv/must-gather-kr4jl" event={"ID":"56b96222-739f-41d2-996e-14e2ee91a139","Type":"ContainerStarted","Data":"79dee8f469c998ce3c1779dff4af47dcba83618a5b09996aa6983b70d3d26f3f"}
Feb 02 14:22:34 crc kubenswrapper[4721]: I0202 14:22:34.228900 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d2dxv/must-gather-kr4jl" event={"ID":"56b96222-739f-41d2-996e-14e2ee91a139","Type":"ContainerStarted","Data":"ee046980151a1ce898253bd863e95503c1103f7eed2587fbfa57c95be0495a2a"}
Feb 02 14:22:34 crc kubenswrapper[4721]: I0202 14:22:34.249015 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-d2dxv/must-gather-kr4jl" podStartSLOduration=1.9911042060000002 podStartE2EDuration="6.248995801s" podCreationTimestamp="2026-02-02 14:22:28 +0000 UTC" firstStartedPulling="2026-02-02 14:22:28.976372389 +0000 UTC m=+4889.278886778" lastFinishedPulling="2026-02-02 14:22:33.234263984 +0000 UTC m=+4893.536778373" observedRunningTime="2026-02-02 14:22:34.242888477 +0000 UTC m=+4894.545402886" watchObservedRunningTime="2026-02-02 14:22:34.248995801 +0000 UTC m=+4894.551510190"
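The startup-latency entry above is internally consistent: the tracker's podStartSLOduration is the end-to-end startup time minus the image-pull window, i.e. 6.248995801s − (14:22:33.234263984 − 14:22:28.976372389 = 4.257891595s) = 1.991104206s, which matches podStartSLOduration=1.9911042060000002 up to float formatting. A quick check of that arithmetic, using the wall-clock timestamps from the entry (the monotonic m=+... suffixes dropped, and watchObservedRunningTime taken as the end of the E2E window, which is what makes the numbers line up):

```go
package main

import (
    "fmt"
    "time"
)

func main() {
    // Layout matching the kubelet's "2026-02-02 14:22:28 +0000 UTC" timestamps.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    parse := func(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    created := parse("2026-02-02 14:22:28 +0000 UTC")
    firstPull := parse("2026-02-02 14:22:28.976372389 +0000 UTC")
    lastPull := parse("2026-02-02 14:22:33.234263984 +0000 UTC")
    running := parse("2026-02-02 14:22:34.248995801 +0000 UTC")

    e2e := running.Sub(created)     // 6.248995801s  (podStartE2EDuration)
    pull := lastPull.Sub(firstPull) // 4.257891595s  spent pulling images
    fmt.Println(e2e, pull, e2e-pull) // e2e-pull = 1.991104206s (podStartSLOduration)
}
```

The same relation holds for the crc-debug pod a few entries below (14.4771186s − 12.958130256s = 1.518988344s).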
Feb 02 14:22:39 crc kubenswrapper[4721]: I0202 14:22:39.153000 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-d2dxv/crc-debug-nsq7x"]
Feb 02 14:22:39 crc kubenswrapper[4721]: I0202 14:22:39.155118 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d2dxv/crc-debug-nsq7x"
Feb 02 14:22:39 crc kubenswrapper[4721]: I0202 14:22:39.182115 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/08510db5-eac2-4005-abe5-4cb8bb7604dc-host\") pod \"crc-debug-nsq7x\" (UID: \"08510db5-eac2-4005-abe5-4cb8bb7604dc\") " pod="openshift-must-gather-d2dxv/crc-debug-nsq7x"
Feb 02 14:22:39 crc kubenswrapper[4721]: I0202 14:22:39.182238 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk22m\" (UniqueName: \"kubernetes.io/projected/08510db5-eac2-4005-abe5-4cb8bb7604dc-kube-api-access-pk22m\") pod \"crc-debug-nsq7x\" (UID: \"08510db5-eac2-4005-abe5-4cb8bb7604dc\") " pod="openshift-must-gather-d2dxv/crc-debug-nsq7x"
Feb 02 14:22:39 crc kubenswrapper[4721]: I0202 14:22:39.284973 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/08510db5-eac2-4005-abe5-4cb8bb7604dc-host\") pod \"crc-debug-nsq7x\" (UID: \"08510db5-eac2-4005-abe5-4cb8bb7604dc\") " pod="openshift-must-gather-d2dxv/crc-debug-nsq7x"
Feb 02 14:22:39 crc kubenswrapper[4721]: I0202 14:22:39.285098 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk22m\" (UniqueName: \"kubernetes.io/projected/08510db5-eac2-4005-abe5-4cb8bb7604dc-kube-api-access-pk22m\") pod \"crc-debug-nsq7x\" (UID: \"08510db5-eac2-4005-abe5-4cb8bb7604dc\") " pod="openshift-must-gather-d2dxv/crc-debug-nsq7x"
Feb 02 14:22:39 crc kubenswrapper[4721]: I0202 14:22:39.285128 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/08510db5-eac2-4005-abe5-4cb8bb7604dc-host\") pod \"crc-debug-nsq7x\" (UID: \"08510db5-eac2-4005-abe5-4cb8bb7604dc\") " pod="openshift-must-gather-d2dxv/crc-debug-nsq7x"
Feb 02 14:22:39 crc kubenswrapper[4721]: I0202 14:22:39.304337 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk22m\" (UniqueName: \"kubernetes.io/projected/08510db5-eac2-4005-abe5-4cb8bb7604dc-kube-api-access-pk22m\") pod \"crc-debug-nsq7x\" (UID: \"08510db5-eac2-4005-abe5-4cb8bb7604dc\") " pod="openshift-must-gather-d2dxv/crc-debug-nsq7x"
Feb 02 14:22:39 crc kubenswrapper[4721]: I0202 14:22:39.477783 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d2dxv/crc-debug-nsq7x"
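The volume sequence above is the reconciler's usual two-phase flow: VerifyControllerAttachedVolume first confirms each volume is available to the node, then MountVolume.SetUp actually mounts it; here that covers a hostPath volume named "host" plus the projected service-account token volume kube-api-access-pk22m, the shape of an `oc debug`-style pod. A sketch of the hostPath stanza with client-go's typed API (the mounted path "/" and the mount point are assumptions; the log does not show the pod spec):

```go
package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // hostPath volume named "host", as mounted by the crc-debug pod above.
    hostPathType := corev1.HostPathDirectory
    vol := corev1.Volume{
        Name: "host",
        VolumeSource: corev1.VolumeSource{
            HostPath: &corev1.HostPathVolumeSource{
                Path: "/", // assumption: debug pods typically mount the node root
                Type: &hostPathType,
            },
        },
    }
    mount := corev1.VolumeMount{Name: "host", MountPath: "/host"}
    fmt.Printf("volume=%+v mount=%+v\n", vol, mount)
    // The "kube-api-access-pk22m" volume in the log is a projected
    // service-account token volume injected automatically; it needs no
    // declaration in the pod spec.
}
```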
Feb 02 14:22:40 crc kubenswrapper[4721]: I0202 14:22:40.298514 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d2dxv/crc-debug-nsq7x" event={"ID":"08510db5-eac2-4005-abe5-4cb8bb7604dc","Type":"ContainerStarted","Data":"3a24955b413585c5b93167ef896b4245df4484e81497f3b0e51abc71423c9d5c"}
Feb 02 14:22:53 crc kubenswrapper[4721]: I0202 14:22:53.449623 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d2dxv/crc-debug-nsq7x" event={"ID":"08510db5-eac2-4005-abe5-4cb8bb7604dc","Type":"ContainerStarted","Data":"3d42ccd527c1c49d2ecc79f70ee1abbc38a002f9404fa0303071492300851127"}
Feb 02 14:22:53 crc kubenswrapper[4721]: I0202 14:22:53.477137 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-d2dxv/crc-debug-nsq7x" podStartSLOduration=1.518988344 podStartE2EDuration="14.4771186s" podCreationTimestamp="2026-02-02 14:22:39 +0000 UTC" firstStartedPulling="2026-02-02 14:22:39.514970465 +0000 UTC m=+4899.817484854" lastFinishedPulling="2026-02-02 14:22:52.473100721 +0000 UTC m=+4912.775615110" observedRunningTime="2026-02-02 14:22:53.470460021 +0000 UTC m=+4913.772974420" watchObservedRunningTime="2026-02-02 14:22:53.4771186 +0000 UTC m=+4913.779632999"
Feb 02 14:23:15 crc kubenswrapper[4721]: I0202 14:23:15.679286 4721 generic.go:334] "Generic (PLEG): container finished" podID="08510db5-eac2-4005-abe5-4cb8bb7604dc" containerID="3d42ccd527c1c49d2ecc79f70ee1abbc38a002f9404fa0303071492300851127" exitCode=0
Feb 02 14:23:15 crc kubenswrapper[4721]: I0202 14:23:15.679359 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d2dxv/crc-debug-nsq7x" event={"ID":"08510db5-eac2-4005-abe5-4cb8bb7604dc","Type":"ContainerDied","Data":"3d42ccd527c1c49d2ecc79f70ee1abbc38a002f9404fa0303071492300851127"}
Feb 02 14:23:16 crc kubenswrapper[4721]: I0202 14:23:16.825875 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d2dxv/crc-debug-nsq7x"
Feb 02 14:23:16 crc kubenswrapper[4721]: I0202 14:23:16.861826 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-d2dxv/crc-debug-nsq7x"]
Feb 02 14:23:16 crc kubenswrapper[4721]: I0202 14:23:16.874871 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-d2dxv/crc-debug-nsq7x"]
Feb 02 14:23:16 crc kubenswrapper[4721]: I0202 14:23:16.901167 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/08510db5-eac2-4005-abe5-4cb8bb7604dc-host\") pod \"08510db5-eac2-4005-abe5-4cb8bb7604dc\" (UID: \"08510db5-eac2-4005-abe5-4cb8bb7604dc\") "
Feb 02 14:23:16 crc kubenswrapper[4721]: I0202 14:23:16.901328 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08510db5-eac2-4005-abe5-4cb8bb7604dc-host" (OuterVolumeSpecName: "host") pod "08510db5-eac2-4005-abe5-4cb8bb7604dc" (UID: "08510db5-eac2-4005-abe5-4cb8bb7604dc"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:23:16 crc kubenswrapper[4721]: I0202 14:23:16.901537 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pk22m\" (UniqueName: \"kubernetes.io/projected/08510db5-eac2-4005-abe5-4cb8bb7604dc-kube-api-access-pk22m\") pod \"08510db5-eac2-4005-abe5-4cb8bb7604dc\" (UID: \"08510db5-eac2-4005-abe5-4cb8bb7604dc\") " Feb 02 14:23:16 crc kubenswrapper[4721]: I0202 14:23:16.902382 4721 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/08510db5-eac2-4005-abe5-4cb8bb7604dc-host\") on node \"crc\" DevicePath \"\"" Feb 02 14:23:16 crc kubenswrapper[4721]: I0202 14:23:16.909060 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08510db5-eac2-4005-abe5-4cb8bb7604dc-kube-api-access-pk22m" (OuterVolumeSpecName: "kube-api-access-pk22m") pod "08510db5-eac2-4005-abe5-4cb8bb7604dc" (UID: "08510db5-eac2-4005-abe5-4cb8bb7604dc"). InnerVolumeSpecName "kube-api-access-pk22m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:23:17 crc kubenswrapper[4721]: I0202 14:23:17.004919 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pk22m\" (UniqueName: \"kubernetes.io/projected/08510db5-eac2-4005-abe5-4cb8bb7604dc-kube-api-access-pk22m\") on node \"crc\" DevicePath \"\"" Feb 02 14:23:17 crc kubenswrapper[4721]: I0202 14:23:17.726004 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a24955b413585c5b93167ef896b4245df4484e81497f3b0e51abc71423c9d5c" Feb 02 14:23:17 crc kubenswrapper[4721]: I0202 14:23:17.726173 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d2dxv/crc-debug-nsq7x" Feb 02 14:23:18 crc kubenswrapper[4721]: I0202 14:23:18.277471 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-d2dxv/crc-debug-br45z"] Feb 02 14:23:18 crc kubenswrapper[4721]: E0202 14:23:18.278150 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08510db5-eac2-4005-abe5-4cb8bb7604dc" containerName="container-00" Feb 02 14:23:18 crc kubenswrapper[4721]: I0202 14:23:18.278162 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="08510db5-eac2-4005-abe5-4cb8bb7604dc" containerName="container-00" Feb 02 14:23:18 crc kubenswrapper[4721]: I0202 14:23:18.278392 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="08510db5-eac2-4005-abe5-4cb8bb7604dc" containerName="container-00" Feb 02 14:23:18 crc kubenswrapper[4721]: I0202 14:23:18.279255 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-d2dxv/crc-debug-br45z" Feb 02 14:23:18 crc kubenswrapper[4721]: I0202 14:23:18.335749 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb746\" (UniqueName: \"kubernetes.io/projected/00ff63e7-883d-4e6d-985b-9122517d5081-kube-api-access-rb746\") pod \"crc-debug-br45z\" (UID: \"00ff63e7-883d-4e6d-985b-9122517d5081\") " pod="openshift-must-gather-d2dxv/crc-debug-br45z" Feb 02 14:23:18 crc kubenswrapper[4721]: I0202 14:23:18.335862 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/00ff63e7-883d-4e6d-985b-9122517d5081-host\") pod \"crc-debug-br45z\" (UID: \"00ff63e7-883d-4e6d-985b-9122517d5081\") " pod="openshift-must-gather-d2dxv/crc-debug-br45z" Feb 02 14:23:18 crc kubenswrapper[4721]: I0202 14:23:18.425887 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08510db5-eac2-4005-abe5-4cb8bb7604dc" path="/var/lib/kubelet/pods/08510db5-eac2-4005-abe5-4cb8bb7604dc/volumes" Feb 02 14:23:18 crc kubenswrapper[4721]: I0202 14:23:18.437783 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/00ff63e7-883d-4e6d-985b-9122517d5081-host\") pod \"crc-debug-br45z\" (UID: \"00ff63e7-883d-4e6d-985b-9122517d5081\") " pod="openshift-must-gather-d2dxv/crc-debug-br45z" Feb 02 14:23:18 crc kubenswrapper[4721]: I0202 14:23:18.437987 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/00ff63e7-883d-4e6d-985b-9122517d5081-host\") pod \"crc-debug-br45z\" (UID: \"00ff63e7-883d-4e6d-985b-9122517d5081\") " pod="openshift-must-gather-d2dxv/crc-debug-br45z" Feb 02 14:23:18 crc kubenswrapper[4721]: I0202 14:23:18.438015 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb746\" (UniqueName: \"kubernetes.io/projected/00ff63e7-883d-4e6d-985b-9122517d5081-kube-api-access-rb746\") pod \"crc-debug-br45z\" (UID: \"00ff63e7-883d-4e6d-985b-9122517d5081\") " pod="openshift-must-gather-d2dxv/crc-debug-br45z" Feb 02 14:23:18 crc kubenswrapper[4721]: I0202 14:23:18.870981 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb746\" (UniqueName: \"kubernetes.io/projected/00ff63e7-883d-4e6d-985b-9122517d5081-kube-api-access-rb746\") pod \"crc-debug-br45z\" (UID: \"00ff63e7-883d-4e6d-985b-9122517d5081\") " pod="openshift-must-gather-d2dxv/crc-debug-br45z" Feb 02 14:23:18 crc kubenswrapper[4721]: I0202 14:23:18.901497 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-d2dxv/crc-debug-br45z" Feb 02 14:23:19 crc kubenswrapper[4721]: I0202 14:23:19.746437 4721 generic.go:334] "Generic (PLEG): container finished" podID="00ff63e7-883d-4e6d-985b-9122517d5081" containerID="c5e695876117098c7aa3acb18071dce73b746830410f10fe488c54040af4236a" exitCode=1 Feb 02 14:23:19 crc kubenswrapper[4721]: I0202 14:23:19.746529 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d2dxv/crc-debug-br45z" event={"ID":"00ff63e7-883d-4e6d-985b-9122517d5081","Type":"ContainerDied","Data":"c5e695876117098c7aa3acb18071dce73b746830410f10fe488c54040af4236a"} Feb 02 14:23:19 crc kubenswrapper[4721]: I0202 14:23:19.746999 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d2dxv/crc-debug-br45z" event={"ID":"00ff63e7-883d-4e6d-985b-9122517d5081","Type":"ContainerStarted","Data":"ab857196e7ef95f6d0e888d211bada36de4151ce2298d06ba4d3ef0ee53bfbe5"} Feb 02 14:23:19 crc kubenswrapper[4721]: I0202 14:23:19.788530 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-d2dxv/crc-debug-br45z"] Feb 02 14:23:19 crc kubenswrapper[4721]: I0202 14:23:19.798527 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-d2dxv/crc-debug-br45z"] Feb 02 14:23:21 crc kubenswrapper[4721]: I0202 14:23:21.296449 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d2dxv/crc-debug-br45z" Feb 02 14:23:21 crc kubenswrapper[4721]: I0202 14:23:21.407925 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb746\" (UniqueName: \"kubernetes.io/projected/00ff63e7-883d-4e6d-985b-9122517d5081-kube-api-access-rb746\") pod \"00ff63e7-883d-4e6d-985b-9122517d5081\" (UID: \"00ff63e7-883d-4e6d-985b-9122517d5081\") " Feb 02 14:23:21 crc kubenswrapper[4721]: I0202 14:23:21.408046 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/00ff63e7-883d-4e6d-985b-9122517d5081-host\") pod \"00ff63e7-883d-4e6d-985b-9122517d5081\" (UID: \"00ff63e7-883d-4e6d-985b-9122517d5081\") " Feb 02 14:23:21 crc kubenswrapper[4721]: I0202 14:23:21.408138 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00ff63e7-883d-4e6d-985b-9122517d5081-host" (OuterVolumeSpecName: "host") pod "00ff63e7-883d-4e6d-985b-9122517d5081" (UID: "00ff63e7-883d-4e6d-985b-9122517d5081"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:23:21 crc kubenswrapper[4721]: I0202 14:23:21.408818 4721 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/00ff63e7-883d-4e6d-985b-9122517d5081-host\") on node \"crc\" DevicePath \"\"" Feb 02 14:23:21 crc kubenswrapper[4721]: I0202 14:23:21.416363 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00ff63e7-883d-4e6d-985b-9122517d5081-kube-api-access-rb746" (OuterVolumeSpecName: "kube-api-access-rb746") pod "00ff63e7-883d-4e6d-985b-9122517d5081" (UID: "00ff63e7-883d-4e6d-985b-9122517d5081"). InnerVolumeSpecName "kube-api-access-rb746". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:23:21 crc kubenswrapper[4721]: I0202 14:23:21.510777 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb746\" (UniqueName: \"kubernetes.io/projected/00ff63e7-883d-4e6d-985b-9122517d5081-kube-api-access-rb746\") on node \"crc\" DevicePath \"\""
Feb 02 14:23:21 crc kubenswrapper[4721]: I0202 14:23:21.768804 4721 scope.go:117] "RemoveContainer" containerID="c5e695876117098c7aa3acb18071dce73b746830410f10fe488c54040af4236a"
Feb 02 14:23:21 crc kubenswrapper[4721]: I0202 14:23:21.769551 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d2dxv/crc-debug-br45z"
Feb 02 14:23:22 crc kubenswrapper[4721]: I0202 14:23:22.425124 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00ff63e7-883d-4e6d-985b-9122517d5081" path="/var/lib/kubelet/pods/00ff63e7-883d-4e6d-985b-9122517d5081/volumes"
Feb 02 14:24:14 crc kubenswrapper[4721]: I0202 14:24:14.530933 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-57c58bbb98-gpbp2_183927fe-ec27-461b-8284-3e71f5cb666a/barbican-api/0.log"
Feb 02 14:24:14 crc kubenswrapper[4721]: I0202 14:24:14.708017 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-57c58bbb98-gpbp2_183927fe-ec27-461b-8284-3e71f5cb666a/barbican-api-log/0.log"
Feb 02 14:24:14 crc kubenswrapper[4721]: I0202 14:24:14.723203 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6f4497866b-px6fz_755b5957-fcfa-486a-8e63-d562742d6650/barbican-keystone-listener/0.log"
Feb 02 14:24:14 crc kubenswrapper[4721]: I0202 14:24:14.804853 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6f4497866b-px6fz_755b5957-fcfa-486a-8e63-d562742d6650/barbican-keystone-listener-log/0.log"
Feb 02 14:24:14 crc kubenswrapper[4721]: I0202 14:24:14.962211 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-d4595f9f9-4d2g5_93a7211b-9a15-4765-99e2-520bd1d62ff1/barbican-worker/0.log"
Feb 02 14:24:15 crc kubenswrapper[4721]: I0202 14:24:15.004490 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-d4595f9f9-4d2g5_93a7211b-9a15-4765-99e2-520bd1d62ff1/barbican-worker-log/0.log"
Feb 02 14:24:15 crc kubenswrapper[4721]: I0202 14:24:15.408276 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c678f02b-cbee-4578-9e28-067b63af2682/ceilometer-central-agent/0.log"
Feb 02 14:24:15 crc kubenswrapper[4721]: I0202 14:24:15.420355 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c678f02b-cbee-4578-9e28-067b63af2682/proxy-httpd/0.log"
Feb 02 14:24:15 crc kubenswrapper[4721]: I0202 14:24:15.442662 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c678f02b-cbee-4578-9e28-067b63af2682/sg-core/0.log"
Feb 02 14:24:15 crc kubenswrapper[4721]: I0202 14:24:15.467090 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c678f02b-cbee-4578-9e28-067b63af2682/ceilometer-notification-agent/0.log"
Feb 02 14:24:15 crc kubenswrapper[4721]: I0202 14:24:15.674355 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_0eabfa0b-0304-4eda-8f8a-dc9160569e4b/cinder-api/0.log"
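From here the log is dominated by "Finished parsing log file" entries: the kubelet reading container log files back from /var/log/pods, which is what the just-started must-gather pod's log collection looks like from the node. Each of those files holds CRI-format lines: an RFC3339Nano timestamp, the stream (stdout/stderr), a partial/full tag (P or F), then the message. A sketch of a reader for that format, using one of the paths named in the entries above; the four-field split mirrors the documented CRI logging format:

```go
package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
)

func main() {
    // One of the files named in the log entries above.
    f, err := os.Open("/var/log/pods/openstack_cinder-api-0_0eabfa0b-0304-4eda-8f8a-dc9160569e4b/cinder-api/0.log")
    if err != nil {
        panic(err)
    }
    defer f.Close()

    sc := bufio.NewScanner(f)
    for sc.Scan() {
        // CRI format: "<RFC3339Nano ts> <stdout|stderr> <P|F> <message>".
        parts := strings.SplitN(sc.Text(), " ", 4)
        if len(parts) < 4 {
            continue // skip malformed lines
        }
        ts, stream, tag, msg := parts[0], parts[1], parts[2], parts[3]
        fmt.Printf("%s [%s/%s] %s\n", ts, stream, tag, msg)
    }
    // Note: lines longer than bufio's default 64 KiB buffer would need sc.Buffer.
    if err := sc.Err(); err != nil {
        panic(err)
    }
}
```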
Feb 02 14:24:15 crc kubenswrapper[4721]: I0202 14:24:15.777188 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_0eabfa0b-0304-4eda-8f8a-dc9160569e4b/cinder-api-log/0.log"
Feb 02 14:24:15 crc kubenswrapper[4721]: I0202 14:24:15.978540 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7d63f1df-bbdc-42ee-a234-2d691a3ce7ba/cinder-scheduler/0.log"
Feb 02 14:24:16 crc kubenswrapper[4721]: I0202 14:24:16.046371 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7d63f1df-bbdc-42ee-a234-2d691a3ce7ba/probe/0.log"
Feb 02 14:24:16 crc kubenswrapper[4721]: I0202 14:24:16.137182 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b7bbf7cf9-fz8n5_1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b/init/0.log"
Feb 02 14:24:16 crc kubenswrapper[4721]: I0202 14:24:16.344002 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b7bbf7cf9-fz8n5_1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b/init/0.log"
Feb 02 14:24:16 crc kubenswrapper[4721]: I0202 14:24:16.351158 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b7bbf7cf9-fz8n5_1f4d7a08-5d54-424f-a9a6-e9ddd07b3b9b/dnsmasq-dns/0.log"
Feb 02 14:24:16 crc kubenswrapper[4721]: I0202 14:24:16.440648 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_23e9328e-fd9a-4a87-946b-2c46e25bea51/glance-httpd/0.log"
Feb 02 14:24:16 crc kubenswrapper[4721]: I0202 14:24:16.562191 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_23e9328e-fd9a-4a87-946b-2c46e25bea51/glance-log/0.log"
Feb 02 14:24:16 crc kubenswrapper[4721]: I0202 14:24:16.656226 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5f5129b1-fc26-40ba-9cf7-0f86e93507cd/glance-log/0.log"
Feb 02 14:24:16 crc kubenswrapper[4721]: I0202 14:24:16.667052 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5f5129b1-fc26-40ba-9cf7-0f86e93507cd/glance-httpd/0.log"
Feb 02 14:24:17 crc kubenswrapper[4721]: I0202 14:24:17.248243 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-596786fd64-rpzql_6616d7ae-f7c6-4fe2-bf68-a0668a84fb8c/heat-engine/0.log"
Feb 02 14:24:17 crc kubenswrapper[4721]: I0202 14:24:17.436491 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-86974d69bd-t6gcz_5d1412d5-76f7-4132-889d-f706432b3ecc/heat-api/0.log"
Feb 02 14:24:17 crc kubenswrapper[4721]: I0202 14:24:17.498451 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-64c55c4cc7-4htzp_5c23b064-e24b-4ab3-886d-d731004b7479/heat-cfnapi/0.log"
Feb 02 14:24:17 crc kubenswrapper[4721]: I0202 14:24:17.884780 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29500681-7xs4g_9b53b618-4727-4a17-a000-ba0ccd1084c1/keystone-cron/0.log"
Feb 02 14:24:18 crc kubenswrapper[4721]: I0202 14:24:18.028144 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-784866f846-pjz9x_5883cb27-6bc8-4309-aeac-64a54a46eb89/keystone-api/0.log"
Feb 02 14:24:18 crc kubenswrapper[4721]: I0202 14:24:18.134007 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_ac827915-eefd-428b-9303-581069f92ed8/kube-state-metrics/0.log"
Feb 02 14:24:18 crc kubenswrapper[4721]: I0202 14:24:18.433805 4721 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_mysqld-exporter-0_8abde028-43c5-4489-8de6-7c2da9f037c2/mysqld-exporter/0.log" Feb 02 14:24:18 crc kubenswrapper[4721]: I0202 14:24:18.735866 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-74cc678f5-fkzpw_40093ddb-a585-427d-88f6-110b4ea07578/neutron-api/0.log" Feb 02 14:24:18 crc kubenswrapper[4721]: I0202 14:24:18.799407 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-74cc678f5-fkzpw_40093ddb-a585-427d-88f6-110b4ea07578/neutron-httpd/0.log" Feb 02 14:24:19 crc kubenswrapper[4721]: I0202 14:24:19.190802 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8eccca7c-e269-4ecc-9fce-024196f66aaa/nova-api-log/0.log" Feb 02 14:24:19 crc kubenswrapper[4721]: I0202 14:24:19.355022 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_562aee22-e2a0-4706-b65a-7e9398823dec/nova-cell0-conductor-conductor/0.log" Feb 02 14:24:19 crc kubenswrapper[4721]: I0202 14:24:19.500874 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8eccca7c-e269-4ecc-9fce-024196f66aaa/nova-api-api/0.log" Feb 02 14:24:20 crc kubenswrapper[4721]: I0202 14:24:20.052514 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_3830e692-ad9d-48c7-800f-dc63cadb2376/nova-cell1-conductor-conductor/0.log" Feb 02 14:24:20 crc kubenswrapper[4721]: I0202 14:24:20.147237 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_ab8f3d4c-b262-4b71-a934-f584c1f07790/nova-cell1-novncproxy-novncproxy/0.log" Feb 02 14:24:20 crc kubenswrapper[4721]: I0202 14:24:20.298439 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_f6e14b26-cab3-4acd-aad2-8cda004e0282/nova-metadata-log/0.log" Feb 02 14:24:20 crc kubenswrapper[4721]: I0202 14:24:20.608989 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_d3e7848b-5b3e-4e6b-8c5e-82cd9f2f7728/nova-scheduler-scheduler/0.log" Feb 02 14:24:20 crc kubenswrapper[4721]: I0202 14:24:20.659623 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2505bd6b-64d4-4d17-9c1a-0e89562612be/mysql-bootstrap/0.log" Feb 02 14:24:20 crc kubenswrapper[4721]: I0202 14:24:20.939371 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2505bd6b-64d4-4d17-9c1a-0e89562612be/mysql-bootstrap/0.log" Feb 02 14:24:20 crc kubenswrapper[4721]: I0202 14:24:20.964686 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2505bd6b-64d4-4d17-9c1a-0e89562612be/galera/0.log" Feb 02 14:24:21 crc kubenswrapper[4721]: I0202 14:24:21.197352 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4/mysql-bootstrap/0.log" Feb 02 14:24:21 crc kubenswrapper[4721]: I0202 14:24:21.446583 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4/mysql-bootstrap/0.log" Feb 02 14:24:21 crc kubenswrapper[4721]: I0202 14:24:21.499817 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f5bedd25-b07d-44e9-a7e5-cfa263ad4fa4/galera/0.log" Feb 02 14:24:21 crc kubenswrapper[4721]: I0202 14:24:21.658570 4721 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstackclient_32729b18-a175-4abd-a8cf-392d318b64d8/openstackclient/0.log" Feb 02 14:24:21 crc kubenswrapper[4721]: I0202 14:24:21.788813 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-l5h78_298ac2ef-6edb-40cb-bb92-8a8e039f333b/ovn-controller/0.log" Feb 02 14:24:22 crc kubenswrapper[4721]: I0202 14:24:22.041099 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-hkwkv_753a63ae-970e-4dd1-a284-bc3b6027ca64/openstack-network-exporter/0.log" Feb 02 14:24:22 crc kubenswrapper[4721]: I0202 14:24:22.182763 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_f6e14b26-cab3-4acd-aad2-8cda004e0282/nova-metadata-metadata/0.log" Feb 02 14:24:22 crc kubenswrapper[4721]: I0202 14:24:22.191566 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gz9nz_a75df612-e3f4-4ea3-bfc8-daceaf59205d/ovsdb-server-init/0.log" Feb 02 14:24:22 crc kubenswrapper[4721]: I0202 14:24:22.413399 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gz9nz_a75df612-e3f4-4ea3-bfc8-daceaf59205d/ovsdb-server/0.log" Feb 02 14:24:22 crc kubenswrapper[4721]: I0202 14:24:22.479913 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gz9nz_a75df612-e3f4-4ea3-bfc8-daceaf59205d/ovsdb-server-init/0.log" Feb 02 14:24:22 crc kubenswrapper[4721]: I0202 14:24:22.495028 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gz9nz_a75df612-e3f4-4ea3-bfc8-daceaf59205d/ovs-vswitchd/0.log" Feb 02 14:24:22 crc kubenswrapper[4721]: I0202 14:24:22.626468 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_cd5938c1-e4b9-4437-a379-c25bc5b1c243/openstack-network-exporter/0.log" Feb 02 14:24:22 crc kubenswrapper[4721]: I0202 14:24:22.755808 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_cd5938c1-e4b9-4437-a379-c25bc5b1c243/ovn-northd/0.log" Feb 02 14:24:22 crc kubenswrapper[4721]: I0202 14:24:22.872626 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_080bfc29-50bc-4ba1-b097-4f5c54586d8c/openstack-network-exporter/0.log" Feb 02 14:24:22 crc kubenswrapper[4721]: I0202 14:24:22.908355 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_080bfc29-50bc-4ba1-b097-4f5c54586d8c/ovsdbserver-nb/0.log" Feb 02 14:24:23 crc kubenswrapper[4721]: I0202 14:24:23.045665 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4e175d27-fe10-4fb7-9ce6-cb98379357cc/openstack-network-exporter/0.log" Feb 02 14:24:23 crc kubenswrapper[4721]: I0202 14:24:23.194554 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4e175d27-fe10-4fb7-9ce6-cb98379357cc/ovsdbserver-sb/0.log" Feb 02 14:24:23 crc kubenswrapper[4721]: I0202 14:24:23.345897 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6ccdcdf5fb-gncnr_8e3f4574-6ad6-4b37-abf5-2005c8692a44/placement-api/0.log" Feb 02 14:24:23 crc kubenswrapper[4721]: I0202 14:24:23.434117 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6ccdcdf5fb-gncnr_8e3f4574-6ad6-4b37-abf5-2005c8692a44/placement-log/0.log" Feb 02 14:24:23 crc kubenswrapper[4721]: I0202 14:24:23.449416 4721 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_prometheus-metric-storage-0_6a34d077-087f-4b04-98c5-22e09450dcb3/init-config-reloader/0.log" Feb 02 14:24:23 crc kubenswrapper[4721]: I0202 14:24:23.937079 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6a34d077-087f-4b04-98c5-22e09450dcb3/init-config-reloader/0.log" Feb 02 14:24:23 crc kubenswrapper[4721]: I0202 14:24:23.976358 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6a34d077-087f-4b04-98c5-22e09450dcb3/thanos-sidecar/0.log" Feb 02 14:24:24 crc kubenswrapper[4721]: I0202 14:24:24.010044 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6a34d077-087f-4b04-98c5-22e09450dcb3/config-reloader/0.log" Feb 02 14:24:24 crc kubenswrapper[4721]: I0202 14:24:24.023647 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6a34d077-087f-4b04-98c5-22e09450dcb3/prometheus/0.log" Feb 02 14:24:24 crc kubenswrapper[4721]: I0202 14:24:24.269390 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_496bb19e-217b-4896-9bee-8082ac5da28b/setup-container/0.log" Feb 02 14:24:24 crc kubenswrapper[4721]: I0202 14:24:24.403561 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_496bb19e-217b-4896-9bee-8082ac5da28b/setup-container/0.log" Feb 02 14:24:24 crc kubenswrapper[4721]: I0202 14:24:24.435821 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_496bb19e-217b-4896-9bee-8082ac5da28b/rabbitmq/0.log" Feb 02 14:24:24 crc kubenswrapper[4721]: I0202 14:24:24.527682 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a57cea33-806c-4028-b59f-9f5e65289eac/setup-container/0.log" Feb 02 14:24:24 crc kubenswrapper[4721]: I0202 14:24:24.774818 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a57cea33-806c-4028-b59f-9f5e65289eac/setup-container/0.log" Feb 02 14:24:24 crc kubenswrapper[4721]: I0202 14:24:24.811028 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a57cea33-806c-4028-b59f-9f5e65289eac/rabbitmq/0.log" Feb 02 14:24:24 crc kubenswrapper[4721]: I0202 14:24:24.875987 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_4d21d961-1540-4610-89c0-ee265f66d728/setup-container/0.log" Feb 02 14:24:25 crc kubenswrapper[4721]: I0202 14:24:25.157748 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_b6dbd607-3fa8-48e0-b420-4e939a47c460/setup-container/0.log" Feb 02 14:24:25 crc kubenswrapper[4721]: I0202 14:24:25.177890 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_4d21d961-1540-4610-89c0-ee265f66d728/setup-container/0.log" Feb 02 14:24:25 crc kubenswrapper[4721]: I0202 14:24:25.198130 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_4d21d961-1540-4610-89c0-ee265f66d728/rabbitmq/0.log" Feb 02 14:24:25 crc kubenswrapper[4721]: I0202 14:24:25.452036 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_b6dbd607-3fa8-48e0-b420-4e939a47c460/setup-container/0.log" Feb 02 14:24:25 crc kubenswrapper[4721]: I0202 14:24:25.470699 4721 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-2_b6dbd607-3fa8-48e0-b420-4e939a47c460/rabbitmq/0.log" Feb 02 14:24:25 crc kubenswrapper[4721]: I0202 14:24:25.615157 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-9b87bd57c-2glsn_c04183e6-a1f0-4d8c-aa00-8dd660336a3b/proxy-httpd/0.log" Feb 02 14:24:25 crc kubenswrapper[4721]: I0202 14:24:25.642406 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-9b87bd57c-2glsn_c04183e6-a1f0-4d8c-aa00-8dd660336a3b/proxy-server/0.log" Feb 02 14:24:25 crc kubenswrapper[4721]: I0202 14:24:25.743882 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-4rnrx_bd1f15d5-77dc-4b6d-81bf-c2a8286da820/swift-ring-rebalance/0.log" Feb 02 14:24:25 crc kubenswrapper[4721]: I0202 14:24:25.955245 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eabe6b07-da9d-4980-99b4-12c02640c88d/account-auditor/0.log" Feb 02 14:24:26 crc kubenswrapper[4721]: I0202 14:24:26.010157 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eabe6b07-da9d-4980-99b4-12c02640c88d/account-replicator/0.log" Feb 02 14:24:26 crc kubenswrapper[4721]: I0202 14:24:26.037505 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eabe6b07-da9d-4980-99b4-12c02640c88d/account-reaper/0.log" Feb 02 14:24:26 crc kubenswrapper[4721]: I0202 14:24:26.124518 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eabe6b07-da9d-4980-99b4-12c02640c88d/account-server/0.log" Feb 02 14:24:26 crc kubenswrapper[4721]: I0202 14:24:26.240912 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eabe6b07-da9d-4980-99b4-12c02640c88d/container-auditor/0.log" Feb 02 14:24:26 crc kubenswrapper[4721]: I0202 14:24:26.271505 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eabe6b07-da9d-4980-99b4-12c02640c88d/container-replicator/0.log" Feb 02 14:24:26 crc kubenswrapper[4721]: I0202 14:24:26.293380 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eabe6b07-da9d-4980-99b4-12c02640c88d/container-server/0.log" Feb 02 14:24:26 crc kubenswrapper[4721]: I0202 14:24:26.331745 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eabe6b07-da9d-4980-99b4-12c02640c88d/container-updater/0.log" Feb 02 14:24:26 crc kubenswrapper[4721]: I0202 14:24:26.485133 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eabe6b07-da9d-4980-99b4-12c02640c88d/object-expirer/0.log" Feb 02 14:24:26 crc kubenswrapper[4721]: I0202 14:24:26.485414 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eabe6b07-da9d-4980-99b4-12c02640c88d/object-auditor/0.log" Feb 02 14:24:26 crc kubenswrapper[4721]: I0202 14:24:26.572680 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eabe6b07-da9d-4980-99b4-12c02640c88d/object-server/0.log" Feb 02 14:24:26 crc kubenswrapper[4721]: I0202 14:24:26.608857 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eabe6b07-da9d-4980-99b4-12c02640c88d/object-replicator/0.log" Feb 02 14:24:26 crc kubenswrapper[4721]: I0202 14:24:26.740917 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eabe6b07-da9d-4980-99b4-12c02640c88d/object-updater/0.log" Feb 
02 14:24:26 crc kubenswrapper[4721]: I0202 14:24:26.775166 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eabe6b07-da9d-4980-99b4-12c02640c88d/rsync/0.log" Feb 02 14:24:26 crc kubenswrapper[4721]: I0202 14:24:26.811101 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eabe6b07-da9d-4980-99b4-12c02640c88d/swift-recon-cron/0.log" Feb 02 14:24:32 crc kubenswrapper[4721]: I0202 14:24:32.932172 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_a686ac60-f231-4070-98c7-7acbc66c29d5/memcached/0.log" Feb 02 14:24:44 crc kubenswrapper[4721]: I0202 14:24:44.766434 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:24:44 crc kubenswrapper[4721]: I0202 14:24:44.767045 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:24:56 crc kubenswrapper[4721]: I0202 14:24:56.586718 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n_5c48ead1-c06b-4a13-b92a-ce7a474e6233/util/0.log" Feb 02 14:24:56 crc kubenswrapper[4721]: I0202 14:24:56.713544 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n_5c48ead1-c06b-4a13-b92a-ce7a474e6233/util/0.log" Feb 02 14:24:56 crc kubenswrapper[4721]: I0202 14:24:56.773338 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n_5c48ead1-c06b-4a13-b92a-ce7a474e6233/pull/0.log" Feb 02 14:24:56 crc kubenswrapper[4721]: I0202 14:24:56.796247 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n_5c48ead1-c06b-4a13-b92a-ce7a474e6233/pull/0.log" Feb 02 14:24:56 crc kubenswrapper[4721]: I0202 14:24:56.984668 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n_5c48ead1-c06b-4a13-b92a-ce7a474e6233/pull/0.log" Feb 02 14:24:57 crc kubenswrapper[4721]: I0202 14:24:57.005313 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n_5c48ead1-c06b-4a13-b92a-ce7a474e6233/util/0.log" Feb 02 14:24:57 crc kubenswrapper[4721]: I0202 14:24:57.006413 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_37793a4ef2e329483307c3ff79aaabd503a190102fbc19ed8ead42e2cc5zq4n_5c48ead1-c06b-4a13-b92a-ce7a474e6233/extract/0.log" Feb 02 14:24:57 crc kubenswrapper[4721]: I0202 14:24:57.264354 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-729mv_0562e590-1a66-4fbc-862d-833bc1600eac/manager/0.log" Feb 02 14:24:57 crc kubenswrapper[4721]: I0202 14:24:57.277677 4721 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-8zlv5_0c1486a5-ee95-4cde-9631-3c7c7aa31ae7/manager/0.log" Feb 02 14:24:57 crc kubenswrapper[4721]: I0202 14:24:57.473200 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-s75st_23be57b1-6b3e-4346-93f9-2c45b0562d2b/manager/0.log" Feb 02 14:24:57 crc kubenswrapper[4721]: I0202 14:24:57.570201 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-q5lbf_20f771bf-d003-48b0-8e50-0d1217f24b45/manager/0.log" Feb 02 14:24:57 crc kubenswrapper[4721]: I0202 14:24:57.767632 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-sq5w5_e5a04e0d-8a73-4f21-a61d-374d7a5784fb/manager/0.log" Feb 02 14:24:57 crc kubenswrapper[4721]: I0202 14:24:57.870344 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-x6p4t_8a86dacc-de73-4b52-994c-3b089ee427cc/manager/0.log" Feb 02 14:24:58 crc kubenswrapper[4721]: I0202 14:24:58.250153 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-hktcl_9d11c3e4-10b4-4ff4-aaa2-04e342d984b4/manager/0.log" Feb 02 14:24:58 crc kubenswrapper[4721]: I0202 14:24:58.364091 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-qqpfm_a13f2341-6b53-4a7b-b67a-4a1d1846805d/manager/0.log" Feb 02 14:24:58 crc kubenswrapper[4721]: I0202 14:24:58.632168 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-5x28t_39686eda-a258-408b-bf9c-7ff7d515ed9d/manager/0.log" Feb 02 14:24:58 crc kubenswrapper[4721]: I0202 14:24:58.633630 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-5vbh8_b9eeabcd-14ef-4800-9f0c-1a3cd515d2aa/manager/0.log" Feb 02 14:24:58 crc kubenswrapper[4721]: I0202 14:24:58.871696 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-ct6hc_1c4864d3-2fdd-4b98-ac89-aefb49b56187/manager/0.log" Feb 02 14:24:58 crc kubenswrapper[4721]: I0202 14:24:58.925040 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-sjvjw_2d60d537-ea47-42fa-94c3-61704aef0678/manager/0.log" Feb 02 14:24:59 crc kubenswrapper[4721]: I0202 14:24:59.136007 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-6z258_1f3087b4-acf0-4a27-9696-bdfb4728e96c/manager/0.log" Feb 02 14:24:59 crc kubenswrapper[4721]: I0202 14:24:59.145891 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-42qq8_dc736681-960e-4f76-bc10-25f529da020a/manager/0.log" Feb 02 14:24:59 crc kubenswrapper[4721]: I0202 14:24:59.321221 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dtgpkw_ae636942-3520-410e-b70a-b4fc19a527ca/manager/0.log" Feb 02 14:24:59 crc kubenswrapper[4721]: I0202 14:24:59.521785 4721 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-init-b64b9f5cb-mqpbl_e4514067-762e-4638-ad5a-a7d17297bc0d/operator/0.log" Feb 02 14:24:59 crc kubenswrapper[4721]: I0202 14:24:59.823918 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-lxqsx_abf13eed-433d-4afa-809d-bd863e469366/registry-server/0.log" Feb 02 14:25:00 crc kubenswrapper[4721]: I0202 14:25:00.069262 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-rzjts_6b33adce-a49a-4ce2-af29-412661aaf062/manager/0.log" Feb 02 14:25:00 crc kubenswrapper[4721]: I0202 14:25:00.229767 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-kqdjm_ed67384c-22d3-4466-8990-744b122efbf4/manager/0.log" Feb 02 14:25:00 crc kubenswrapper[4721]: I0202 14:25:00.507250 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5d6c59fb84-5s25p_55bc1d80-1d29-4e15-baca-49eee6fd3aa5/manager/0.log" Feb 02 14:25:00 crc kubenswrapper[4721]: I0202 14:25:00.514271 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-xwkhz_56b67b2b-b9fd-4353-88e3-d4f1d44653e2/operator/0.log" Feb 02 14:25:01 crc kubenswrapper[4721]: I0202 14:25:01.193453 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-79zrv_499ca4ef-3867-407b-ab4a-64fff307e296/manager/0.log" Feb 02 14:25:01 crc kubenswrapper[4721]: I0202 14:25:01.197490 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-2828d_60ff9309-fd37-4618-b4f0-38704a558ec0/manager/0.log" Feb 02 14:25:01 crc kubenswrapper[4721]: I0202 14:25:01.293447 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5b9ffd7d65-rgkhb_79e5221b-04ee-496d-82b7-16af5b340595/manager/0.log" Feb 02 14:25:01 crc kubenswrapper[4721]: I0202 14:25:01.440289 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-4pk6v_058f996d-8009-4f83-864d-177f7b577cf0/manager/0.log" Feb 02 14:25:14 crc kubenswrapper[4721]: I0202 14:25:14.764127 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:25:14 crc kubenswrapper[4721]: I0202 14:25:14.764565 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:25:24 crc kubenswrapper[4721]: I0202 14:25:24.320213 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-jdzwk_46f85b66-5a30-4bef-909c-26750b18e72d/control-plane-machine-set-operator/0.log" Feb 02 14:25:24 crc kubenswrapper[4721]: I0202 14:25:24.522579 4721 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-pnfph_083f0d8a-e0c4-46ae-8993-8547dd260553/kube-rbac-proxy/0.log"
Feb 02 14:25:24 crc kubenswrapper[4721]: I0202 14:25:24.597762 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-pnfph_083f0d8a-e0c4-46ae-8993-8547dd260553/machine-api-operator/0.log"
Feb 02 14:25:40 crc kubenswrapper[4721]: I0202 14:25:40.169831 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-qxmnx_caf3dcb7-c58d-4d36-9329-c9b8d3c354a8/cert-manager-controller/0.log"
Feb 02 14:25:40 crc kubenswrapper[4721]: I0202 14:25:40.357869 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-k5vmr_988d3eab-804d-4db0-8855-b63ebbeabce4/cert-manager-cainjector/0.log"
Feb 02 14:25:40 crc kubenswrapper[4721]: I0202 14:25:40.432730 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-xjhrq_aff1475e-36c5-471a-b04e-01cefc2d2763/cert-manager-webhook/0.log"
Feb 02 14:25:44 crc kubenswrapper[4721]: I0202 14:25:44.763558 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 14:25:44 crc kubenswrapper[4721]: I0202 14:25:44.764201 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 14:25:44 crc kubenswrapper[4721]: I0202 14:25:44.764248 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rppjz"
Feb 02 14:25:44 crc kubenswrapper[4721]: I0202 14:25:44.765132 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"80a4d97025e59da4be229a82e47246c7035ccedac52df0fa5679fdc8c1c7b8ec"} pod="openshift-machine-config-operator/machine-config-daemon-rppjz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 14:25:44 crc kubenswrapper[4721]: I0202 14:25:44.765205 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" containerID="cri-o://80a4d97025e59da4be229a82e47246c7035ccedac52df0fa5679fdc8c1c7b8ec" gracePeriod=600
Feb 02 14:25:45 crc kubenswrapper[4721]: I0202 14:25:45.341722 4721 generic.go:334] "Generic (PLEG): container finished" podID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerID="80a4d97025e59da4be229a82e47246c7035ccedac52df0fa5679fdc8c1c7b8ec" exitCode=0
Feb 02 14:25:45 crc kubenswrapper[4721]: I0202 14:25:45.341778 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerDied","Data":"80a4d97025e59da4be229a82e47246c7035ccedac52df0fa5679fdc8c1c7b8ec"}
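This span shows the full liveness-failure sequence for machine-config-daemon in one place: refused probes at 14:24:44, 14:25:14 and 14:25:44 (a ~30 s period; three consecutive failures before the kill is consistent with a failureThreshold of 3, though the probe spec itself is not in the log), then the kill with gracePeriod=600 and a clean exit (exitCode=0). The check the prober performs is essentially a timed HTTP GET; a sketch against the endpoint from the log, where the 5-second timeout is an assumption:

```go
package main

import (
    "fmt"
    "net/http"
    "time"
)

func main() {
    // Mirror the kubelet's HTTP liveness check against the endpoint in the log.
    client := &http.Client{Timeout: 5 * time.Second}
    resp, err := client.Get("http://127.0.0.1:8798/health")
    if err != nil {
        // The "connect: connection refused" seen in the log surfaces here.
        fmt.Println("probe failure:", err)
        return
    }
    defer resp.Body.Close()
    // The kubelet treats any status in [200, 400) as a passing probe.
    if resp.StatusCode >= 200 && resp.StatusCode < 400 {
        fmt.Println("probe success:", resp.Status)
    } else {
        fmt.Println("probe failure:", resp.Status)
    }
}
```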
14:25:45.342144 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerStarted","Data":"12a34d4692995e23be9650f3fe27555782fc4f8063d028a49b067d6507a1f26a"} Feb 02 14:25:45 crc kubenswrapper[4721]: I0202 14:25:45.342185 4721 scope.go:117] "RemoveContainer" containerID="6f7ac9e0d3ce5a3abf70aaaf35d53db0f85d58208ac3969dbebc6edfb6879ffd" Feb 02 14:25:56 crc kubenswrapper[4721]: I0202 14:25:56.798721 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-2f9ms_b15ef257-c4ff-4fd9-a04c-a92d38e51b18/nmstate-console-plugin/0.log" Feb 02 14:25:57 crc kubenswrapper[4721]: I0202 14:25:57.001270 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-dlvcq_1cf5f077-bb9b-42de-ab25-70b762c3e2e1/nmstate-handler/0.log" Feb 02 14:25:57 crc kubenswrapper[4721]: I0202 14:25:57.010225 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-mmg2n_be1a5420-ea1d-40e0-bd09-241151dc6755/kube-rbac-proxy/0.log" Feb 02 14:25:57 crc kubenswrapper[4721]: I0202 14:25:57.139195 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-mmg2n_be1a5420-ea1d-40e0-bd09-241151dc6755/nmstate-metrics/0.log" Feb 02 14:25:57 crc kubenswrapper[4721]: I0202 14:25:57.258987 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-trhxn_38f375ca-8f76-4eb1-a92d-d46f7628ecf6/nmstate-operator/0.log" Feb 02 14:25:57 crc kubenswrapper[4721]: I0202 14:25:57.311786 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-j4jzl_92d17aed-5894-45b3-8fe9-08b5dfc7c702/nmstate-webhook/0.log" Feb 02 14:26:14 crc kubenswrapper[4721]: I0202 14:26:14.353513 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-756566789b-zpsf5_af1123f2-fce6-410b-a82b-9b292bb8bf68/manager/0.log" Feb 02 14:26:14 crc kubenswrapper[4721]: I0202 14:26:14.423637 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-756566789b-zpsf5_af1123f2-fce6-410b-a82b-9b292bb8bf68/kube-rbac-proxy/0.log" Feb 02 14:26:30 crc kubenswrapper[4721]: I0202 14:26:30.920520 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-jltbt_30a5c0d6-c773-4914-a3b1-1654a51817a9/prometheus-operator/0.log" Feb 02 14:26:31 crc kubenswrapper[4721]: I0202 14:26:31.130468 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx_a1f92c36-0e50-485e-a728-7b42f1ab44c4/prometheus-operator-admission-webhook/0.log" Feb 02 14:26:31 crc kubenswrapper[4721]: I0202 14:26:31.183973 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9_41c83c75-cfc8-4c33-97cf-484cc7dcd812/prometheus-operator-admission-webhook/0.log" Feb 02 14:26:31 crc kubenswrapper[4721]: I0202 14:26:31.390382 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-b2dk9_7ac0e2d1-4762-4c40-84c9-db0bde4f956f/operator/0.log" Feb 02 14:26:31 crc kubenswrapper[4721]: I0202 14:26:31.417099 
4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-6lvhx_6064a9a4-2316-4bdd-abf1-934e9167528a/observability-ui-dashboards/0.log" Feb 02 14:26:31 crc kubenswrapper[4721]: I0202 14:26:31.639456 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-5w6sx_a3affad2-ab35-4604-8239-56f69bf3727f/perses-operator/0.log" Feb 02 14:26:50 crc kubenswrapper[4721]: I0202 14:26:50.334492 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-79cf69ddc8-9sfnv_f9c5d281-206d-4729-a031-feb5b9234c8f/cluster-logging-operator/0.log" Feb 02 14:26:50 crc kubenswrapper[4721]: I0202 14:26:50.543177 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-ls7f7_749232df-9bfe-43cb-a716-6eadd2cbc290/collector/0.log" Feb 02 14:26:50 crc kubenswrapper[4721]: I0202 14:26:50.594233 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_2cb9902a-5fe1-42ee-a659-eebccc3aec15/loki-compactor/0.log" Feb 02 14:26:50 crc kubenswrapper[4721]: I0202 14:26:50.740909 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5f678c8dd6-tb6gs_7a392a7d-824d-420d-bf0d-66ca95134ea6/loki-distributor/0.log" Feb 02 14:26:50 crc kubenswrapper[4721]: I0202 14:26:50.799374 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f86bf5685-lsthj_6bbaf0c4-9bfc-4cf9-b238-4f494e492243/gateway/0.log" Feb 02 14:26:50 crc kubenswrapper[4721]: I0202 14:26:50.850466 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f86bf5685-lsthj_6bbaf0c4-9bfc-4cf9-b238-4f494e492243/opa/0.log" Feb 02 14:26:50 crc kubenswrapper[4721]: I0202 14:26:50.966615 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f86bf5685-p62nr_e0a2094f-7b9c-426c-b7ea-6a175be407f1/gateway/0.log" Feb 02 14:26:51 crc kubenswrapper[4721]: I0202 14:26:51.006687 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f86bf5685-p62nr_e0a2094f-7b9c-426c-b7ea-6a175be407f1/opa/0.log" Feb 02 14:26:51 crc kubenswrapper[4721]: I0202 14:26:51.140513 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_05f7eb9f-7ce9-4d66-b8e1-cc9eb0c1949e/loki-index-gateway/0.log" Feb 02 14:26:51 crc kubenswrapper[4721]: I0202 14:26:51.938312 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76788598db-mnp7c_98490098-f31f-4ee3-9f15-ee37b8740035/loki-querier/0.log" Feb 02 14:26:51 crc kubenswrapper[4721]: I0202 14:26:51.957303 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_b3605888-b0c3-4049-8f6a-cd4f380b91a7/loki-ingester/0.log" Feb 02 14:26:52 crc kubenswrapper[4721]: I0202 14:26:52.150211 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-69d9546745-xs62z_93d83a8b-3334-43f3-b417-58a7fbd7282c/loki-query-frontend/0.log" Feb 02 14:27:06 crc kubenswrapper[4721]: I0202 14:27:06.707811 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-rq76j_d8fb94c8-b6a7-47c1-bf64-c01350b47983/kube-rbac-proxy/0.log" Feb 02 14:27:06 crc kubenswrapper[4721]: 
I0202 14:27:06.996767 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-rq76j_d8fb94c8-b6a7-47c1-bf64-c01350b47983/controller/0.log" Feb 02 14:27:07 crc kubenswrapper[4721]: I0202 14:27:07.073082 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8ts6n_5f685485-23a9-45dd-90cd-62ab47eab713/cp-frr-files/0.log" Feb 02 14:27:07 crc kubenswrapper[4721]: I0202 14:27:07.241180 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8ts6n_5f685485-23a9-45dd-90cd-62ab47eab713/cp-frr-files/0.log" Feb 02 14:27:07 crc kubenswrapper[4721]: I0202 14:27:07.246714 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8ts6n_5f685485-23a9-45dd-90cd-62ab47eab713/cp-reloader/0.log" Feb 02 14:27:07 crc kubenswrapper[4721]: I0202 14:27:07.261341 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8ts6n_5f685485-23a9-45dd-90cd-62ab47eab713/cp-metrics/0.log" Feb 02 14:27:07 crc kubenswrapper[4721]: I0202 14:27:07.340790 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8ts6n_5f685485-23a9-45dd-90cd-62ab47eab713/cp-reloader/0.log" Feb 02 14:27:07 crc kubenswrapper[4721]: I0202 14:27:07.527825 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8ts6n_5f685485-23a9-45dd-90cd-62ab47eab713/cp-reloader/0.log" Feb 02 14:27:07 crc kubenswrapper[4721]: I0202 14:27:07.534599 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8ts6n_5f685485-23a9-45dd-90cd-62ab47eab713/cp-metrics/0.log" Feb 02 14:27:07 crc kubenswrapper[4721]: I0202 14:27:07.551129 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8ts6n_5f685485-23a9-45dd-90cd-62ab47eab713/cp-frr-files/0.log" Feb 02 14:27:07 crc kubenswrapper[4721]: I0202 14:27:07.556709 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8ts6n_5f685485-23a9-45dd-90cd-62ab47eab713/cp-metrics/0.log" Feb 02 14:27:07 crc kubenswrapper[4721]: I0202 14:27:07.778155 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8ts6n_5f685485-23a9-45dd-90cd-62ab47eab713/cp-metrics/0.log" Feb 02 14:27:07 crc kubenswrapper[4721]: I0202 14:27:07.795991 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8ts6n_5f685485-23a9-45dd-90cd-62ab47eab713/cp-frr-files/0.log" Feb 02 14:27:07 crc kubenswrapper[4721]: I0202 14:27:07.820404 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8ts6n_5f685485-23a9-45dd-90cd-62ab47eab713/controller/0.log" Feb 02 14:27:07 crc kubenswrapper[4721]: I0202 14:27:07.830738 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8ts6n_5f685485-23a9-45dd-90cd-62ab47eab713/cp-reloader/0.log" Feb 02 14:27:08 crc kubenswrapper[4721]: I0202 14:27:08.029669 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8ts6n_5f685485-23a9-45dd-90cd-62ab47eab713/frr-metrics/0.log" Feb 02 14:27:08 crc kubenswrapper[4721]: I0202 14:27:08.054756 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8ts6n_5f685485-23a9-45dd-90cd-62ab47eab713/kube-rbac-proxy/0.log" Feb 02 14:27:08 crc kubenswrapper[4721]: I0202 14:27:08.097575 4721 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-8ts6n_5f685485-23a9-45dd-90cd-62ab47eab713/kube-rbac-proxy-frr/0.log" Feb 02 14:27:08 crc kubenswrapper[4721]: I0202 14:27:08.291186 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8ts6n_5f685485-23a9-45dd-90cd-62ab47eab713/reloader/0.log" Feb 02 14:27:08 crc kubenswrapper[4721]: I0202 14:27:08.383752 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-4t8pn_4fda33e0-d0a3-4266-aeb1-fc07965d8c35/frr-k8s-webhook-server/0.log" Feb 02 14:27:08 crc kubenswrapper[4721]: I0202 14:27:08.618791 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-67895b6557-xpzcz_4c6e741b-2539-4be0-898c-5fee37f67d21/manager/0.log" Feb 02 14:27:08 crc kubenswrapper[4721]: I0202 14:27:08.788578 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-bb6bc86c6-l2cpd_10a7b124-f250-42d3-9e7c-af29d7204edb/webhook-server/0.log" Feb 02 14:27:08 crc kubenswrapper[4721]: I0202 14:27:08.937436 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2hhvl_486fb2e8-15fe-46c1-b62c-89f2b2abf064/kube-rbac-proxy/0.log" Feb 02 14:27:09 crc kubenswrapper[4721]: I0202 14:27:09.415044 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8ts6n_5f685485-23a9-45dd-90cd-62ab47eab713/frr/0.log" Feb 02 14:27:09 crc kubenswrapper[4721]: I0202 14:27:09.695190 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2hhvl_486fb2e8-15fe-46c1-b62c-89f2b2abf064/speaker/0.log" Feb 02 14:27:24 crc kubenswrapper[4721]: I0202 14:27:24.308814 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7_53fd5f54-e7fe-4d86-a5b7-3583e945fff3/util/0.log" Feb 02 14:27:24 crc kubenswrapper[4721]: I0202 14:27:24.444671 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7_53fd5f54-e7fe-4d86-a5b7-3583e945fff3/util/0.log" Feb 02 14:27:24 crc kubenswrapper[4721]: I0202 14:27:24.503041 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7_53fd5f54-e7fe-4d86-a5b7-3583e945fff3/pull/0.log" Feb 02 14:27:24 crc kubenswrapper[4721]: I0202 14:27:24.516778 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7_53fd5f54-e7fe-4d86-a5b7-3583e945fff3/pull/0.log" Feb 02 14:27:24 crc kubenswrapper[4721]: I0202 14:27:24.770666 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7_53fd5f54-e7fe-4d86-a5b7-3583e945fff3/pull/0.log" Feb 02 14:27:24 crc kubenswrapper[4721]: I0202 14:27:24.787373 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7_53fd5f54-e7fe-4d86-a5b7-3583e945fff3/util/0.log" Feb 02 14:27:24 crc kubenswrapper[4721]: I0202 14:27:24.839764 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhtf7_53fd5f54-e7fe-4d86-a5b7-3583e945fff3/extract/0.log" Feb 02 14:27:24 crc 
kubenswrapper[4721]: I0202 14:27:24.978674 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg_6f007b81-04cd-4913-ad24-51aa6c5b60c8/util/0.log" Feb 02 14:27:25 crc kubenswrapper[4721]: I0202 14:27:25.161186 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg_6f007b81-04cd-4913-ad24-51aa6c5b60c8/pull/0.log" Feb 02 14:27:25 crc kubenswrapper[4721]: I0202 14:27:25.169680 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg_6f007b81-04cd-4913-ad24-51aa6c5b60c8/pull/0.log" Feb 02 14:27:25 crc kubenswrapper[4721]: I0202 14:27:25.239598 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg_6f007b81-04cd-4913-ad24-51aa6c5b60c8/util/0.log" Feb 02 14:27:25 crc kubenswrapper[4721]: I0202 14:27:25.438306 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg_6f007b81-04cd-4913-ad24-51aa6c5b60c8/util/0.log" Feb 02 14:27:25 crc kubenswrapper[4721]: I0202 14:27:25.446900 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg_6f007b81-04cd-4913-ad24-51aa6c5b60c8/extract/0.log" Feb 02 14:27:25 crc kubenswrapper[4721]: I0202 14:27:25.463701 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dck9cgg_6f007b81-04cd-4913-ad24-51aa6c5b60c8/pull/0.log" Feb 02 14:27:25 crc kubenswrapper[4721]: I0202 14:27:25.643546 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n_4b23cf05-2074-4e51-b6ef-235b207d8b16/util/0.log" Feb 02 14:27:25 crc kubenswrapper[4721]: I0202 14:27:25.807544 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n_4b23cf05-2074-4e51-b6ef-235b207d8b16/util/0.log" Feb 02 14:27:25 crc kubenswrapper[4721]: I0202 14:27:25.814859 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n_4b23cf05-2074-4e51-b6ef-235b207d8b16/pull/0.log" Feb 02 14:27:25 crc kubenswrapper[4721]: I0202 14:27:25.854602 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n_4b23cf05-2074-4e51-b6ef-235b207d8b16/pull/0.log" Feb 02 14:27:26 crc kubenswrapper[4721]: I0202 14:27:26.027144 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n_4b23cf05-2074-4e51-b6ef-235b207d8b16/util/0.log" Feb 02 14:27:26 crc kubenswrapper[4721]: I0202 14:27:26.071642 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n_4b23cf05-2074-4e51-b6ef-235b207d8b16/extract/0.log" Feb 02 14:27:26 crc kubenswrapper[4721]: I0202 14:27:26.088752 4721 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360b8hh8n_4b23cf05-2074-4e51-b6ef-235b207d8b16/pull/0.log" Feb 02 14:27:26 crc kubenswrapper[4721]: I0202 14:27:26.234402 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl_3d506202-dc87-49f6-9160-ccedb0cbae19/util/0.log" Feb 02 14:27:26 crc kubenswrapper[4721]: I0202 14:27:26.492782 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl_3d506202-dc87-49f6-9160-ccedb0cbae19/util/0.log" Feb 02 14:27:26 crc kubenswrapper[4721]: I0202 14:27:26.511734 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl_3d506202-dc87-49f6-9160-ccedb0cbae19/pull/0.log" Feb 02 14:27:26 crc kubenswrapper[4721]: I0202 14:27:26.522957 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl_3d506202-dc87-49f6-9160-ccedb0cbae19/pull/0.log" Feb 02 14:27:26 crc kubenswrapper[4721]: I0202 14:27:26.648744 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl_3d506202-dc87-49f6-9160-ccedb0cbae19/util/0.log" Feb 02 14:27:26 crc kubenswrapper[4721]: I0202 14:27:26.695108 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl_3d506202-dc87-49f6-9160-ccedb0cbae19/extract/0.log" Feb 02 14:27:26 crc kubenswrapper[4721]: I0202 14:27:26.702961 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138j6sl_3d506202-dc87-49f6-9160-ccedb0cbae19/pull/0.log" Feb 02 14:27:26 crc kubenswrapper[4721]: I0202 14:27:26.872619 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj_6b6f1f89-2c62-4c26-abd3-2d105289fc8c/util/0.log" Feb 02 14:27:27 crc kubenswrapper[4721]: I0202 14:27:27.023576 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj_6b6f1f89-2c62-4c26-abd3-2d105289fc8c/pull/0.log" Feb 02 14:27:27 crc kubenswrapper[4721]: I0202 14:27:27.025651 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj_6b6f1f89-2c62-4c26-abd3-2d105289fc8c/util/0.log" Feb 02 14:27:27 crc kubenswrapper[4721]: I0202 14:27:27.063815 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj_6b6f1f89-2c62-4c26-abd3-2d105289fc8c/pull/0.log" Feb 02 14:27:27 crc kubenswrapper[4721]: I0202 14:27:27.300389 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj_6b6f1f89-2c62-4c26-abd3-2d105289fc8c/util/0.log" Feb 02 14:27:27 crc kubenswrapper[4721]: I0202 14:27:27.301601 4721 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj_6b6f1f89-2c62-4c26-abd3-2d105289fc8c/extract/0.log" Feb 02 14:27:27 crc kubenswrapper[4721]: I0202 14:27:27.335142 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ct9cj_6b6f1f89-2c62-4c26-abd3-2d105289fc8c/pull/0.log" Feb 02 14:27:27 crc kubenswrapper[4721]: I0202 14:27:27.540389 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tgnrr_ff82afec-f54e-4b47-8399-fd27b44558d3/extract-utilities/0.log" Feb 02 14:27:27 crc kubenswrapper[4721]: I0202 14:27:27.742479 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tgnrr_ff82afec-f54e-4b47-8399-fd27b44558d3/extract-content/0.log" Feb 02 14:27:27 crc kubenswrapper[4721]: I0202 14:27:27.756530 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tgnrr_ff82afec-f54e-4b47-8399-fd27b44558d3/extract-content/0.log" Feb 02 14:27:27 crc kubenswrapper[4721]: I0202 14:27:27.758800 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tgnrr_ff82afec-f54e-4b47-8399-fd27b44558d3/extract-utilities/0.log" Feb 02 14:27:27 crc kubenswrapper[4721]: I0202 14:27:27.974149 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tgnrr_ff82afec-f54e-4b47-8399-fd27b44558d3/extract-utilities/0.log" Feb 02 14:27:28 crc kubenswrapper[4721]: I0202 14:27:28.001512 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tgnrr_ff82afec-f54e-4b47-8399-fd27b44558d3/extract-content/0.log" Feb 02 14:27:28 crc kubenswrapper[4721]: I0202 14:27:28.287000 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kv46m_be9ad0b8-eef7-451f-82b9-1b5cc54c63c2/extract-utilities/0.log" Feb 02 14:27:28 crc kubenswrapper[4721]: I0202 14:27:28.557511 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kv46m_be9ad0b8-eef7-451f-82b9-1b5cc54c63c2/extract-content/0.log" Feb 02 14:27:28 crc kubenswrapper[4721]: I0202 14:27:28.593980 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kv46m_be9ad0b8-eef7-451f-82b9-1b5cc54c63c2/extract-content/0.log" Feb 02 14:27:28 crc kubenswrapper[4721]: I0202 14:27:28.594331 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kv46m_be9ad0b8-eef7-451f-82b9-1b5cc54c63c2/extract-utilities/0.log" Feb 02 14:27:28 crc kubenswrapper[4721]: I0202 14:27:28.775637 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tgnrr_ff82afec-f54e-4b47-8399-fd27b44558d3/registry-server/0.log" Feb 02 14:27:28 crc kubenswrapper[4721]: I0202 14:27:28.786758 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kv46m_be9ad0b8-eef7-451f-82b9-1b5cc54c63c2/extract-utilities/0.log" Feb 02 14:27:28 crc kubenswrapper[4721]: I0202 14:27:28.858616 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kv46m_be9ad0b8-eef7-451f-82b9-1b5cc54c63c2/extract-content/0.log" Feb 02 14:27:28 crc kubenswrapper[4721]: I0202 14:27:28.995272 4721 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-wdnhz_884fbbc4-b86d-4f88-9fc6-2aa2015b81d3/marketplace-operator/0.log" Feb 02 14:27:29 crc kubenswrapper[4721]: I0202 14:27:29.095765 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c64xc_8851d4c5-8c20-440c-bb07-d7542ea1620d/extract-utilities/0.log" Feb 02 14:27:29 crc kubenswrapper[4721]: I0202 14:27:29.280998 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c64xc_8851d4c5-8c20-440c-bb07-d7542ea1620d/extract-utilities/0.log" Feb 02 14:27:29 crc kubenswrapper[4721]: I0202 14:27:29.329798 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c64xc_8851d4c5-8c20-440c-bb07-d7542ea1620d/extract-content/0.log" Feb 02 14:27:29 crc kubenswrapper[4721]: I0202 14:27:29.412789 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c64xc_8851d4c5-8c20-440c-bb07-d7542ea1620d/extract-content/0.log" Feb 02 14:27:29 crc kubenswrapper[4721]: I0202 14:27:29.568762 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c64xc_8851d4c5-8c20-440c-bb07-d7542ea1620d/extract-utilities/0.log" Feb 02 14:27:29 crc kubenswrapper[4721]: I0202 14:27:29.658468 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c64xc_8851d4c5-8c20-440c-bb07-d7542ea1620d/extract-content/0.log" Feb 02 14:27:29 crc kubenswrapper[4721]: I0202 14:27:29.867736 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q6jkv_7625a6ea-aff2-4a16-a62a-fec198126d2f/extract-utilities/0.log" Feb 02 14:27:29 crc kubenswrapper[4721]: I0202 14:27:29.895624 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kv46m_be9ad0b8-eef7-451f-82b9-1b5cc54c63c2/registry-server/0.log" Feb 02 14:27:30 crc kubenswrapper[4721]: I0202 14:27:30.007034 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c64xc_8851d4c5-8c20-440c-bb07-d7542ea1620d/registry-server/0.log" Feb 02 14:27:30 crc kubenswrapper[4721]: I0202 14:27:30.143812 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q6jkv_7625a6ea-aff2-4a16-a62a-fec198126d2f/extract-utilities/0.log" Feb 02 14:27:30 crc kubenswrapper[4721]: I0202 14:27:30.154342 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q6jkv_7625a6ea-aff2-4a16-a62a-fec198126d2f/extract-content/0.log" Feb 02 14:27:30 crc kubenswrapper[4721]: I0202 14:27:30.189704 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q6jkv_7625a6ea-aff2-4a16-a62a-fec198126d2f/extract-content/0.log" Feb 02 14:27:30 crc kubenswrapper[4721]: I0202 14:27:30.371139 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q6jkv_7625a6ea-aff2-4a16-a62a-fec198126d2f/extract-utilities/0.log" Feb 02 14:27:30 crc kubenswrapper[4721]: I0202 14:27:30.371630 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q6jkv_7625a6ea-aff2-4a16-a62a-fec198126d2f/extract-content/0.log" Feb 02 14:27:30 crc kubenswrapper[4721]: I0202 14:27:30.726967 4721 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q6jkv_7625a6ea-aff2-4a16-a62a-fec198126d2f/registry-server/0.log" Feb 02 14:27:44 crc kubenswrapper[4721]: I0202 14:27:44.148428 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-567c77c7c4-mwhgx_a1f92c36-0e50-485e-a728-7b42f1ab44c4/prometheus-operator-admission-webhook/0.log" Feb 02 14:27:44 crc kubenswrapper[4721]: I0202 14:27:44.164672 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-jltbt_30a5c0d6-c773-4914-a3b1-1654a51817a9/prometheus-operator/0.log" Feb 02 14:27:44 crc kubenswrapper[4721]: I0202 14:27:44.181673 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-567c77c7c4-zfrs9_41c83c75-cfc8-4c33-97cf-484cc7dcd812/prometheus-operator-admission-webhook/0.log" Feb 02 14:27:44 crc kubenswrapper[4721]: I0202 14:27:44.357982 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-b2dk9_7ac0e2d1-4762-4c40-84c9-db0bde4f956f/operator/0.log" Feb 02 14:27:44 crc kubenswrapper[4721]: I0202 14:27:44.390328 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-6lvhx_6064a9a4-2316-4bdd-abf1-934e9167528a/observability-ui-dashboards/0.log" Feb 02 14:27:44 crc kubenswrapper[4721]: I0202 14:27:44.427657 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-5w6sx_a3affad2-ab35-4604-8239-56f69bf3727f/perses-operator/0.log" Feb 02 14:27:58 crc kubenswrapper[4721]: I0202 14:27:58.983578 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-756566789b-zpsf5_af1123f2-fce6-410b-a82b-9b292bb8bf68/kube-rbac-proxy/0.log" Feb 02 14:27:59 crc kubenswrapper[4721]: I0202 14:27:59.076296 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-756566789b-zpsf5_af1123f2-fce6-410b-a82b-9b292bb8bf68/manager/0.log" Feb 02 14:28:11 crc kubenswrapper[4721]: E0202 14:28:11.149611 4721 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.247:51496->38.129.56.247:38909: write tcp 38.129.56.247:51496->38.129.56.247:38909: write: broken pipe Feb 02 14:28:14 crc kubenswrapper[4721]: I0202 14:28:14.764207 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:28:14 crc kubenswrapper[4721]: I0202 14:28:14.764841 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:28:44 crc kubenswrapper[4721]: I0202 14:28:44.763351 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Feb 02 14:28:44 crc kubenswrapper[4721]: I0202 14:28:44.763986 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:28:48 crc kubenswrapper[4721]: I0202 14:28:48.564225 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x6zb9"] Feb 02 14:28:48 crc kubenswrapper[4721]: E0202 14:28:48.565340 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00ff63e7-883d-4e6d-985b-9122517d5081" containerName="container-00" Feb 02 14:28:48 crc kubenswrapper[4721]: I0202 14:28:48.565354 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="00ff63e7-883d-4e6d-985b-9122517d5081" containerName="container-00" Feb 02 14:28:48 crc kubenswrapper[4721]: I0202 14:28:48.565572 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="00ff63e7-883d-4e6d-985b-9122517d5081" containerName="container-00" Feb 02 14:28:48 crc kubenswrapper[4721]: I0202 14:28:48.567197 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x6zb9" Feb 02 14:28:48 crc kubenswrapper[4721]: I0202 14:28:48.574696 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x6zb9"] Feb 02 14:28:48 crc kubenswrapper[4721]: I0202 14:28:48.616729 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa73e29e-aeec-4257-abab-cc99e8e99afa-utilities\") pod \"certified-operators-x6zb9\" (UID: \"aa73e29e-aeec-4257-abab-cc99e8e99afa\") " pod="openshift-marketplace/certified-operators-x6zb9" Feb 02 14:28:48 crc kubenswrapper[4721]: I0202 14:28:48.617091 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa73e29e-aeec-4257-abab-cc99e8e99afa-catalog-content\") pod \"certified-operators-x6zb9\" (UID: \"aa73e29e-aeec-4257-abab-cc99e8e99afa\") " pod="openshift-marketplace/certified-operators-x6zb9" Feb 02 14:28:48 crc kubenswrapper[4721]: I0202 14:28:48.617331 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfvjj\" (UniqueName: \"kubernetes.io/projected/aa73e29e-aeec-4257-abab-cc99e8e99afa-kube-api-access-tfvjj\") pod \"certified-operators-x6zb9\" (UID: \"aa73e29e-aeec-4257-abab-cc99e8e99afa\") " pod="openshift-marketplace/certified-operators-x6zb9" Feb 02 14:28:48 crc kubenswrapper[4721]: I0202 14:28:48.719997 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa73e29e-aeec-4257-abab-cc99e8e99afa-utilities\") pod \"certified-operators-x6zb9\" (UID: \"aa73e29e-aeec-4257-abab-cc99e8e99afa\") " pod="openshift-marketplace/certified-operators-x6zb9" Feb 02 14:28:48 crc kubenswrapper[4721]: I0202 14:28:48.720209 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa73e29e-aeec-4257-abab-cc99e8e99afa-catalog-content\") pod \"certified-operators-x6zb9\" (UID: \"aa73e29e-aeec-4257-abab-cc99e8e99afa\") " 
pod="openshift-marketplace/certified-operators-x6zb9" Feb 02 14:28:48 crc kubenswrapper[4721]: I0202 14:28:48.720349 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfvjj\" (UniqueName: \"kubernetes.io/projected/aa73e29e-aeec-4257-abab-cc99e8e99afa-kube-api-access-tfvjj\") pod \"certified-operators-x6zb9\" (UID: \"aa73e29e-aeec-4257-abab-cc99e8e99afa\") " pod="openshift-marketplace/certified-operators-x6zb9" Feb 02 14:28:48 crc kubenswrapper[4721]: I0202 14:28:48.720689 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa73e29e-aeec-4257-abab-cc99e8e99afa-utilities\") pod \"certified-operators-x6zb9\" (UID: \"aa73e29e-aeec-4257-abab-cc99e8e99afa\") " pod="openshift-marketplace/certified-operators-x6zb9" Feb 02 14:28:48 crc kubenswrapper[4721]: I0202 14:28:48.720753 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa73e29e-aeec-4257-abab-cc99e8e99afa-catalog-content\") pod \"certified-operators-x6zb9\" (UID: \"aa73e29e-aeec-4257-abab-cc99e8e99afa\") " pod="openshift-marketplace/certified-operators-x6zb9" Feb 02 14:28:48 crc kubenswrapper[4721]: I0202 14:28:48.757122 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jrptg"] Feb 02 14:28:48 crc kubenswrapper[4721]: I0202 14:28:48.760380 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jrptg" Feb 02 14:28:48 crc kubenswrapper[4721]: I0202 14:28:48.761172 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfvjj\" (UniqueName: \"kubernetes.io/projected/aa73e29e-aeec-4257-abab-cc99e8e99afa-kube-api-access-tfvjj\") pod \"certified-operators-x6zb9\" (UID: \"aa73e29e-aeec-4257-abab-cc99e8e99afa\") " pod="openshift-marketplace/certified-operators-x6zb9" Feb 02 14:28:48 crc kubenswrapper[4721]: I0202 14:28:48.773679 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jrptg"] Feb 02 14:28:48 crc kubenswrapper[4721]: I0202 14:28:48.822630 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzcxd\" (UniqueName: \"kubernetes.io/projected/b2305307-fc37-4522-abb5-6dc428e94e61-kube-api-access-lzcxd\") pod \"community-operators-jrptg\" (UID: \"b2305307-fc37-4522-abb5-6dc428e94e61\") " pod="openshift-marketplace/community-operators-jrptg" Feb 02 14:28:48 crc kubenswrapper[4721]: I0202 14:28:48.822710 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2305307-fc37-4522-abb5-6dc428e94e61-utilities\") pod \"community-operators-jrptg\" (UID: \"b2305307-fc37-4522-abb5-6dc428e94e61\") " pod="openshift-marketplace/community-operators-jrptg" Feb 02 14:28:48 crc kubenswrapper[4721]: I0202 14:28:48.822781 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2305307-fc37-4522-abb5-6dc428e94e61-catalog-content\") pod \"community-operators-jrptg\" (UID: \"b2305307-fc37-4522-abb5-6dc428e94e61\") " pod="openshift-marketplace/community-operators-jrptg" Feb 02 14:28:48 crc kubenswrapper[4721]: I0202 14:28:48.890756 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x6zb9" Feb 02 14:28:48 crc kubenswrapper[4721]: I0202 14:28:48.926262 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzcxd\" (UniqueName: \"kubernetes.io/projected/b2305307-fc37-4522-abb5-6dc428e94e61-kube-api-access-lzcxd\") pod \"community-operators-jrptg\" (UID: \"b2305307-fc37-4522-abb5-6dc428e94e61\") " pod="openshift-marketplace/community-operators-jrptg" Feb 02 14:28:48 crc kubenswrapper[4721]: I0202 14:28:48.926632 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2305307-fc37-4522-abb5-6dc428e94e61-utilities\") pod \"community-operators-jrptg\" (UID: \"b2305307-fc37-4522-abb5-6dc428e94e61\") " pod="openshift-marketplace/community-operators-jrptg" Feb 02 14:28:48 crc kubenswrapper[4721]: I0202 14:28:48.926738 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2305307-fc37-4522-abb5-6dc428e94e61-catalog-content\") pod \"community-operators-jrptg\" (UID: \"b2305307-fc37-4522-abb5-6dc428e94e61\") " pod="openshift-marketplace/community-operators-jrptg" Feb 02 14:28:48 crc kubenswrapper[4721]: I0202 14:28:48.927485 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2305307-fc37-4522-abb5-6dc428e94e61-utilities\") pod \"community-operators-jrptg\" (UID: \"b2305307-fc37-4522-abb5-6dc428e94e61\") " pod="openshift-marketplace/community-operators-jrptg" Feb 02 14:28:48 crc kubenswrapper[4721]: I0202 14:28:48.927500 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2305307-fc37-4522-abb5-6dc428e94e61-catalog-content\") pod \"community-operators-jrptg\" (UID: \"b2305307-fc37-4522-abb5-6dc428e94e61\") " pod="openshift-marketplace/community-operators-jrptg" Feb 02 14:28:48 crc kubenswrapper[4721]: I0202 14:28:48.954111 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzcxd\" (UniqueName: \"kubernetes.io/projected/b2305307-fc37-4522-abb5-6dc428e94e61-kube-api-access-lzcxd\") pod \"community-operators-jrptg\" (UID: \"b2305307-fc37-4522-abb5-6dc428e94e61\") " pod="openshift-marketplace/community-operators-jrptg" Feb 02 14:28:49 crc kubenswrapper[4721]: I0202 14:28:49.154865 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jrptg" Feb 02 14:28:49 crc kubenswrapper[4721]: I0202 14:28:49.719383 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x6zb9"] Feb 02 14:28:49 crc kubenswrapper[4721]: I0202 14:28:49.945509 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jrptg"] Feb 02 14:28:50 crc kubenswrapper[4721]: I0202 14:28:50.551810 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6zb9" event={"ID":"aa73e29e-aeec-4257-abab-cc99e8e99afa","Type":"ContainerStarted","Data":"c5d72c4cb5231aaa48bd8c849bdb7a55ce6b86d4b2df1ba54009fff2f035fd3c"} Feb 02 14:28:50 crc kubenswrapper[4721]: I0202 14:28:50.553507 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrptg" event={"ID":"b2305307-fc37-4522-abb5-6dc428e94e61","Type":"ContainerStarted","Data":"037261c6a323f6dbd716adcfb2e228053f7afc8794d4b86219c39f4889d652e7"} Feb 02 14:28:51 crc kubenswrapper[4721]: I0202 14:28:51.164380 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gqkjm"] Feb 02 14:28:51 crc kubenswrapper[4721]: I0202 14:28:51.167853 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gqkjm" Feb 02 14:28:51 crc kubenswrapper[4721]: I0202 14:28:51.179711 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gqkjm"] Feb 02 14:28:51 crc kubenswrapper[4721]: I0202 14:28:51.265781 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef3b13ac-f3ee-4b4b-a8eb-365a926869e6-utilities\") pod \"redhat-operators-gqkjm\" (UID: \"ef3b13ac-f3ee-4b4b-a8eb-365a926869e6\") " pod="openshift-marketplace/redhat-operators-gqkjm" Feb 02 14:28:51 crc kubenswrapper[4721]: I0202 14:28:51.265982 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef3b13ac-f3ee-4b4b-a8eb-365a926869e6-catalog-content\") pod \"redhat-operators-gqkjm\" (UID: \"ef3b13ac-f3ee-4b4b-a8eb-365a926869e6\") " pod="openshift-marketplace/redhat-operators-gqkjm" Feb 02 14:28:51 crc kubenswrapper[4721]: I0202 14:28:51.266026 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdrfd\" (UniqueName: \"kubernetes.io/projected/ef3b13ac-f3ee-4b4b-a8eb-365a926869e6-kube-api-access-mdrfd\") pod \"redhat-operators-gqkjm\" (UID: \"ef3b13ac-f3ee-4b4b-a8eb-365a926869e6\") " pod="openshift-marketplace/redhat-operators-gqkjm" Feb 02 14:28:51 crc kubenswrapper[4721]: I0202 14:28:51.369102 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef3b13ac-f3ee-4b4b-a8eb-365a926869e6-catalog-content\") pod \"redhat-operators-gqkjm\" (UID: \"ef3b13ac-f3ee-4b4b-a8eb-365a926869e6\") " pod="openshift-marketplace/redhat-operators-gqkjm" Feb 02 14:28:51 crc kubenswrapper[4721]: I0202 14:28:51.369553 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdrfd\" (UniqueName: \"kubernetes.io/projected/ef3b13ac-f3ee-4b4b-a8eb-365a926869e6-kube-api-access-mdrfd\") pod \"redhat-operators-gqkjm\" (UID: 
\"ef3b13ac-f3ee-4b4b-a8eb-365a926869e6\") " pod="openshift-marketplace/redhat-operators-gqkjm" Feb 02 14:28:51 crc kubenswrapper[4721]: I0202 14:28:51.369673 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef3b13ac-f3ee-4b4b-a8eb-365a926869e6-catalog-content\") pod \"redhat-operators-gqkjm\" (UID: \"ef3b13ac-f3ee-4b4b-a8eb-365a926869e6\") " pod="openshift-marketplace/redhat-operators-gqkjm" Feb 02 14:28:51 crc kubenswrapper[4721]: I0202 14:28:51.369710 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef3b13ac-f3ee-4b4b-a8eb-365a926869e6-utilities\") pod \"redhat-operators-gqkjm\" (UID: \"ef3b13ac-f3ee-4b4b-a8eb-365a926869e6\") " pod="openshift-marketplace/redhat-operators-gqkjm" Feb 02 14:28:51 crc kubenswrapper[4721]: I0202 14:28:51.369989 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef3b13ac-f3ee-4b4b-a8eb-365a926869e6-utilities\") pod \"redhat-operators-gqkjm\" (UID: \"ef3b13ac-f3ee-4b4b-a8eb-365a926869e6\") " pod="openshift-marketplace/redhat-operators-gqkjm" Feb 02 14:28:51 crc kubenswrapper[4721]: I0202 14:28:51.391185 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdrfd\" (UniqueName: \"kubernetes.io/projected/ef3b13ac-f3ee-4b4b-a8eb-365a926869e6-kube-api-access-mdrfd\") pod \"redhat-operators-gqkjm\" (UID: \"ef3b13ac-f3ee-4b4b-a8eb-365a926869e6\") " pod="openshift-marketplace/redhat-operators-gqkjm" Feb 02 14:28:51 crc kubenswrapper[4721]: I0202 14:28:51.511321 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gqkjm" Feb 02 14:28:51 crc kubenswrapper[4721]: I0202 14:28:51.566475 4721 generic.go:334] "Generic (PLEG): container finished" podID="b2305307-fc37-4522-abb5-6dc428e94e61" containerID="45f2eea2c429dbc41d6c0c9c84d8c1d4337067337a0e5187b2ddf2ea0476a477" exitCode=0 Feb 02 14:28:51 crc kubenswrapper[4721]: I0202 14:28:51.566527 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrptg" event={"ID":"b2305307-fc37-4522-abb5-6dc428e94e61","Type":"ContainerDied","Data":"45f2eea2c429dbc41d6c0c9c84d8c1d4337067337a0e5187b2ddf2ea0476a477"} Feb 02 14:28:51 crc kubenswrapper[4721]: I0202 14:28:51.569435 4721 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 14:28:51 crc kubenswrapper[4721]: I0202 14:28:51.570835 4721 generic.go:334] "Generic (PLEG): container finished" podID="aa73e29e-aeec-4257-abab-cc99e8e99afa" containerID="2ae243d246dc83d8f161289ab6eb6c82efad98a9f4b7e1771cf1c0f294f7fcc5" exitCode=0 Feb 02 14:28:51 crc kubenswrapper[4721]: I0202 14:28:51.570865 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6zb9" event={"ID":"aa73e29e-aeec-4257-abab-cc99e8e99afa","Type":"ContainerDied","Data":"2ae243d246dc83d8f161289ab6eb6c82efad98a9f4b7e1771cf1c0f294f7fcc5"} Feb 02 14:28:52 crc kubenswrapper[4721]: I0202 14:28:52.017826 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gqkjm"] Feb 02 14:28:52 crc kubenswrapper[4721]: W0202 14:28:52.030507 4721 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef3b13ac_f3ee_4b4b_a8eb_365a926869e6.slice/crio-15ed5f7bb533c11c83fb5d5905a7a20f12acc04f97e988e956205502593356c9 WatchSource:0}: Error finding container 15ed5f7bb533c11c83fb5d5905a7a20f12acc04f97e988e956205502593356c9: Status 404 returned error can't find the container with id 15ed5f7bb533c11c83fb5d5905a7a20f12acc04f97e988e956205502593356c9 Feb 02 14:28:52 crc kubenswrapper[4721]: I0202 14:28:52.584322 4721 generic.go:334] "Generic (PLEG): container finished" podID="ef3b13ac-f3ee-4b4b-a8eb-365a926869e6" containerID="f1979fe9a92ae0c671cb466e891c9042ba98ed5b8fb336d632a9e9d82d0813c1" exitCode=0 Feb 02 14:28:52 crc kubenswrapper[4721]: I0202 14:28:52.584522 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqkjm" event={"ID":"ef3b13ac-f3ee-4b4b-a8eb-365a926869e6","Type":"ContainerDied","Data":"f1979fe9a92ae0c671cb466e891c9042ba98ed5b8fb336d632a9e9d82d0813c1"} Feb 02 14:28:52 crc kubenswrapper[4721]: I0202 14:28:52.584644 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqkjm" event={"ID":"ef3b13ac-f3ee-4b4b-a8eb-365a926869e6","Type":"ContainerStarted","Data":"15ed5f7bb533c11c83fb5d5905a7a20f12acc04f97e988e956205502593356c9"} Feb 02 14:28:53 crc kubenswrapper[4721]: I0202 14:28:53.603943 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6zb9" event={"ID":"aa73e29e-aeec-4257-abab-cc99e8e99afa","Type":"ContainerStarted","Data":"c92eaf1ac3e8622070f78fbfd4bc88dd8c6eebf96d764ee8dc4c9bce9c8c2f4b"} Feb 02 14:28:53 crc kubenswrapper[4721]: I0202 14:28:53.610509 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrptg" event={"ID":"b2305307-fc37-4522-abb5-6dc428e94e61","Type":"ContainerStarted","Data":"dd3627080e2dde3280df7f039683d58fe97f48d5c559afaab34d198530a21aa4"} Feb 02 14:28:54 crc kubenswrapper[4721]: I0202 14:28:54.622451 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqkjm" event={"ID":"ef3b13ac-f3ee-4b4b-a8eb-365a926869e6","Type":"ContainerStarted","Data":"bb504c21a6cea0a7496dbfbf4e6c97a51631619cc916f155c9b3788feb3c7df9"} Feb 02 14:28:54 crc kubenswrapper[4721]: I0202 14:28:54.625958 4721 generic.go:334] "Generic (PLEG): container finished" podID="aa73e29e-aeec-4257-abab-cc99e8e99afa" containerID="c92eaf1ac3e8622070f78fbfd4bc88dd8c6eebf96d764ee8dc4c9bce9c8c2f4b" exitCode=0 Feb 02 14:28:54 crc kubenswrapper[4721]: I0202 14:28:54.626033 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6zb9" event={"ID":"aa73e29e-aeec-4257-abab-cc99e8e99afa","Type":"ContainerDied","Data":"c92eaf1ac3e8622070f78fbfd4bc88dd8c6eebf96d764ee8dc4c9bce9c8c2f4b"} Feb 02 14:28:54 crc kubenswrapper[4721]: I0202 14:28:54.628468 4721 generic.go:334] "Generic (PLEG): container finished" podID="b2305307-fc37-4522-abb5-6dc428e94e61" containerID="dd3627080e2dde3280df7f039683d58fe97f48d5c559afaab34d198530a21aa4" exitCode=0 Feb 02 14:28:54 crc kubenswrapper[4721]: I0202 14:28:54.628510 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrptg" event={"ID":"b2305307-fc37-4522-abb5-6dc428e94e61","Type":"ContainerDied","Data":"dd3627080e2dde3280df7f039683d58fe97f48d5c559afaab34d198530a21aa4"} Feb 02 14:28:55 crc kubenswrapper[4721]: I0202 14:28:55.643136 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-x6zb9" event={"ID":"aa73e29e-aeec-4257-abab-cc99e8e99afa","Type":"ContainerStarted","Data":"ebbf4cace5aefcc85a77e7f9064057583bd5cbf813e2854bd7ad81896b7b391f"} Feb 02 14:28:55 crc kubenswrapper[4721]: I0202 14:28:55.650029 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrptg" event={"ID":"b2305307-fc37-4522-abb5-6dc428e94e61","Type":"ContainerStarted","Data":"99be6af392b49413d48b4bcac1fa9431ca1a8e1ac8889b73f875ce9d9eb15be8"} Feb 02 14:28:55 crc kubenswrapper[4721]: I0202 14:28:55.675012 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x6zb9" podStartSLOduration=4.095566349 podStartE2EDuration="7.674982266s" podCreationTimestamp="2026-02-02 14:28:48 +0000 UTC" firstStartedPulling="2026-02-02 14:28:51.572507711 +0000 UTC m=+5271.875022120" lastFinishedPulling="2026-02-02 14:28:55.151923648 +0000 UTC m=+5275.454438037" observedRunningTime="2026-02-02 14:28:55.663731982 +0000 UTC m=+5275.966246381" watchObservedRunningTime="2026-02-02 14:28:55.674982266 +0000 UTC m=+5275.977496655" Feb 02 14:28:55 crc kubenswrapper[4721]: I0202 14:28:55.695489 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jrptg" podStartSLOduration=4.024291394 podStartE2EDuration="7.695451249s" podCreationTimestamp="2026-02-02 14:28:48 +0000 UTC" firstStartedPulling="2026-02-02 14:28:51.568980296 +0000 UTC m=+5271.871494685" lastFinishedPulling="2026-02-02 14:28:55.240140151 +0000 UTC m=+5275.542654540" observedRunningTime="2026-02-02 14:28:55.690219168 +0000 UTC m=+5275.992733587" watchObservedRunningTime="2026-02-02 14:28:55.695451249 +0000 UTC m=+5275.997965638" Feb 02 14:28:58 crc kubenswrapper[4721]: I0202 14:28:58.892907 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x6zb9" Feb 02 14:28:58 crc kubenswrapper[4721]: I0202 14:28:58.894912 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x6zb9" Feb 02 14:28:58 crc kubenswrapper[4721]: I0202 14:28:58.954546 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x6zb9" Feb 02 14:28:59 crc kubenswrapper[4721]: I0202 14:28:59.156708 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jrptg" Feb 02 14:28:59 crc kubenswrapper[4721]: I0202 14:28:59.156773 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jrptg" Feb 02 14:28:59 crc kubenswrapper[4721]: I0202 14:28:59.209142 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jrptg" Feb 02 14:29:00 crc kubenswrapper[4721]: I0202 14:29:00.709565 4721 generic.go:334] "Generic (PLEG): container finished" podID="ef3b13ac-f3ee-4b4b-a8eb-365a926869e6" containerID="bb504c21a6cea0a7496dbfbf4e6c97a51631619cc916f155c9b3788feb3c7df9" exitCode=0 Feb 02 14:29:00 crc kubenswrapper[4721]: I0202 14:29:00.711816 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqkjm" event={"ID":"ef3b13ac-f3ee-4b4b-a8eb-365a926869e6","Type":"ContainerDied","Data":"bb504c21a6cea0a7496dbfbf4e6c97a51631619cc916f155c9b3788feb3c7df9"} Feb 02 14:29:01 crc kubenswrapper[4721]: 
I0202 14:29:01.173401 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x6zb9" Feb 02 14:29:02 crc kubenswrapper[4721]: I0202 14:29:02.738758 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqkjm" event={"ID":"ef3b13ac-f3ee-4b4b-a8eb-365a926869e6","Type":"ContainerStarted","Data":"2c5f9e0314e57b27f735487f6e47e6d06478739d292a9ee40102436d47095166"} Feb 02 14:29:02 crc kubenswrapper[4721]: I0202 14:29:02.765092 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gqkjm" podStartSLOduration=2.622837155 podStartE2EDuration="11.765055463s" podCreationTimestamp="2026-02-02 14:28:51 +0000 UTC" firstStartedPulling="2026-02-02 14:28:52.589836621 +0000 UTC m=+5272.892351010" lastFinishedPulling="2026-02-02 14:29:01.732054929 +0000 UTC m=+5282.034569318" observedRunningTime="2026-02-02 14:29:02.764007644 +0000 UTC m=+5283.066522033" watchObservedRunningTime="2026-02-02 14:29:02.765055463 +0000 UTC m=+5283.067569862" Feb 02 14:29:03 crc kubenswrapper[4721]: I0202 14:29:03.755500 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x6zb9"] Feb 02 14:29:03 crc kubenswrapper[4721]: I0202 14:29:03.756292 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x6zb9" podUID="aa73e29e-aeec-4257-abab-cc99e8e99afa" containerName="registry-server" containerID="cri-o://ebbf4cace5aefcc85a77e7f9064057583bd5cbf813e2854bd7ad81896b7b391f" gracePeriod=2 Feb 02 14:29:04 crc kubenswrapper[4721]: I0202 14:29:04.428470 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x6zb9" Feb 02 14:29:04 crc kubenswrapper[4721]: I0202 14:29:04.507206 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfvjj\" (UniqueName: \"kubernetes.io/projected/aa73e29e-aeec-4257-abab-cc99e8e99afa-kube-api-access-tfvjj\") pod \"aa73e29e-aeec-4257-abab-cc99e8e99afa\" (UID: \"aa73e29e-aeec-4257-abab-cc99e8e99afa\") " Feb 02 14:29:04 crc kubenswrapper[4721]: I0202 14:29:04.508888 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa73e29e-aeec-4257-abab-cc99e8e99afa-catalog-content\") pod \"aa73e29e-aeec-4257-abab-cc99e8e99afa\" (UID: \"aa73e29e-aeec-4257-abab-cc99e8e99afa\") " Feb 02 14:29:04 crc kubenswrapper[4721]: I0202 14:29:04.509031 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa73e29e-aeec-4257-abab-cc99e8e99afa-utilities\") pod \"aa73e29e-aeec-4257-abab-cc99e8e99afa\" (UID: \"aa73e29e-aeec-4257-abab-cc99e8e99afa\") " Feb 02 14:29:04 crc kubenswrapper[4721]: I0202 14:29:04.509932 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa73e29e-aeec-4257-abab-cc99e8e99afa-utilities" (OuterVolumeSpecName: "utilities") pod "aa73e29e-aeec-4257-abab-cc99e8e99afa" (UID: "aa73e29e-aeec-4257-abab-cc99e8e99afa"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:29:04 crc kubenswrapper[4721]: I0202 14:29:04.510526 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa73e29e-aeec-4257-abab-cc99e8e99afa-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:29:04 crc kubenswrapper[4721]: I0202 14:29:04.515398 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa73e29e-aeec-4257-abab-cc99e8e99afa-kube-api-access-tfvjj" (OuterVolumeSpecName: "kube-api-access-tfvjj") pod "aa73e29e-aeec-4257-abab-cc99e8e99afa" (UID: "aa73e29e-aeec-4257-abab-cc99e8e99afa"). InnerVolumeSpecName "kube-api-access-tfvjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:29:04 crc kubenswrapper[4721]: I0202 14:29:04.561710 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa73e29e-aeec-4257-abab-cc99e8e99afa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa73e29e-aeec-4257-abab-cc99e8e99afa" (UID: "aa73e29e-aeec-4257-abab-cc99e8e99afa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:29:04 crc kubenswrapper[4721]: I0202 14:29:04.613343 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfvjj\" (UniqueName: \"kubernetes.io/projected/aa73e29e-aeec-4257-abab-cc99e8e99afa-kube-api-access-tfvjj\") on node \"crc\" DevicePath \"\"" Feb 02 14:29:04 crc kubenswrapper[4721]: I0202 14:29:04.613383 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa73e29e-aeec-4257-abab-cc99e8e99afa-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:29:04 crc kubenswrapper[4721]: I0202 14:29:04.776132 4721 generic.go:334] "Generic (PLEG): container finished" podID="aa73e29e-aeec-4257-abab-cc99e8e99afa" containerID="ebbf4cace5aefcc85a77e7f9064057583bd5cbf813e2854bd7ad81896b7b391f" exitCode=0 Feb 02 14:29:04 crc kubenswrapper[4721]: I0202 14:29:04.776538 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6zb9" event={"ID":"aa73e29e-aeec-4257-abab-cc99e8e99afa","Type":"ContainerDied","Data":"ebbf4cace5aefcc85a77e7f9064057583bd5cbf813e2854bd7ad81896b7b391f"} Feb 02 14:29:04 crc kubenswrapper[4721]: I0202 14:29:04.776580 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6zb9" event={"ID":"aa73e29e-aeec-4257-abab-cc99e8e99afa","Type":"ContainerDied","Data":"c5d72c4cb5231aaa48bd8c849bdb7a55ce6b86d4b2df1ba54009fff2f035fd3c"} Feb 02 14:29:04 crc kubenswrapper[4721]: I0202 14:29:04.776608 4721 scope.go:117] "RemoveContainer" containerID="ebbf4cace5aefcc85a77e7f9064057583bd5cbf813e2854bd7ad81896b7b391f" Feb 02 14:29:04 crc kubenswrapper[4721]: I0202 14:29:04.776828 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x6zb9" Feb 02 14:29:04 crc kubenswrapper[4721]: I0202 14:29:04.810621 4721 scope.go:117] "RemoveContainer" containerID="c92eaf1ac3e8622070f78fbfd4bc88dd8c6eebf96d764ee8dc4c9bce9c8c2f4b" Feb 02 14:29:04 crc kubenswrapper[4721]: I0202 14:29:04.825985 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x6zb9"] Feb 02 14:29:04 crc kubenswrapper[4721]: I0202 14:29:04.840862 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x6zb9"] Feb 02 14:29:04 crc kubenswrapper[4721]: I0202 14:29:04.841388 4721 scope.go:117] "RemoveContainer" containerID="2ae243d246dc83d8f161289ab6eb6c82efad98a9f4b7e1771cf1c0f294f7fcc5" Feb 02 14:29:04 crc kubenswrapper[4721]: I0202 14:29:04.899578 4721 scope.go:117] "RemoveContainer" containerID="ebbf4cace5aefcc85a77e7f9064057583bd5cbf813e2854bd7ad81896b7b391f" Feb 02 14:29:04 crc kubenswrapper[4721]: E0202 14:29:04.900041 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebbf4cace5aefcc85a77e7f9064057583bd5cbf813e2854bd7ad81896b7b391f\": container with ID starting with ebbf4cace5aefcc85a77e7f9064057583bd5cbf813e2854bd7ad81896b7b391f not found: ID does not exist" containerID="ebbf4cace5aefcc85a77e7f9064057583bd5cbf813e2854bd7ad81896b7b391f" Feb 02 14:29:04 crc kubenswrapper[4721]: I0202 14:29:04.900091 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebbf4cace5aefcc85a77e7f9064057583bd5cbf813e2854bd7ad81896b7b391f"} err="failed to get container status \"ebbf4cace5aefcc85a77e7f9064057583bd5cbf813e2854bd7ad81896b7b391f\": rpc error: code = NotFound desc = could not find container \"ebbf4cace5aefcc85a77e7f9064057583bd5cbf813e2854bd7ad81896b7b391f\": container with ID starting with ebbf4cace5aefcc85a77e7f9064057583bd5cbf813e2854bd7ad81896b7b391f not found: ID does not exist" Feb 02 14:29:04 crc kubenswrapper[4721]: I0202 14:29:04.900123 4721 scope.go:117] "RemoveContainer" containerID="c92eaf1ac3e8622070f78fbfd4bc88dd8c6eebf96d764ee8dc4c9bce9c8c2f4b" Feb 02 14:29:04 crc kubenswrapper[4721]: E0202 14:29:04.900460 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c92eaf1ac3e8622070f78fbfd4bc88dd8c6eebf96d764ee8dc4c9bce9c8c2f4b\": container with ID starting with c92eaf1ac3e8622070f78fbfd4bc88dd8c6eebf96d764ee8dc4c9bce9c8c2f4b not found: ID does not exist" containerID="c92eaf1ac3e8622070f78fbfd4bc88dd8c6eebf96d764ee8dc4c9bce9c8c2f4b" Feb 02 14:29:04 crc kubenswrapper[4721]: I0202 14:29:04.900489 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c92eaf1ac3e8622070f78fbfd4bc88dd8c6eebf96d764ee8dc4c9bce9c8c2f4b"} err="failed to get container status \"c92eaf1ac3e8622070f78fbfd4bc88dd8c6eebf96d764ee8dc4c9bce9c8c2f4b\": rpc error: code = NotFound desc = could not find container \"c92eaf1ac3e8622070f78fbfd4bc88dd8c6eebf96d764ee8dc4c9bce9c8c2f4b\": container with ID starting with c92eaf1ac3e8622070f78fbfd4bc88dd8c6eebf96d764ee8dc4c9bce9c8c2f4b not found: ID does not exist" Feb 02 14:29:04 crc kubenswrapper[4721]: I0202 14:29:04.900504 4721 scope.go:117] "RemoveContainer" containerID="2ae243d246dc83d8f161289ab6eb6c82efad98a9f4b7e1771cf1c0f294f7fcc5" Feb 02 14:29:04 crc kubenswrapper[4721]: E0202 14:29:04.900716 4721 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2ae243d246dc83d8f161289ab6eb6c82efad98a9f4b7e1771cf1c0f294f7fcc5\": container with ID starting with 2ae243d246dc83d8f161289ab6eb6c82efad98a9f4b7e1771cf1c0f294f7fcc5 not found: ID does not exist" containerID="2ae243d246dc83d8f161289ab6eb6c82efad98a9f4b7e1771cf1c0f294f7fcc5" Feb 02 14:29:04 crc kubenswrapper[4721]: I0202 14:29:04.900744 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ae243d246dc83d8f161289ab6eb6c82efad98a9f4b7e1771cf1c0f294f7fcc5"} err="failed to get container status \"2ae243d246dc83d8f161289ab6eb6c82efad98a9f4b7e1771cf1c0f294f7fcc5\": rpc error: code = NotFound desc = could not find container \"2ae243d246dc83d8f161289ab6eb6c82efad98a9f4b7e1771cf1c0f294f7fcc5\": container with ID starting with 2ae243d246dc83d8f161289ab6eb6c82efad98a9f4b7e1771cf1c0f294f7fcc5 not found: ID does not exist" Feb 02 14:29:06 crc kubenswrapper[4721]: I0202 14:29:06.423666 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa73e29e-aeec-4257-abab-cc99e8e99afa" path="/var/lib/kubelet/pods/aa73e29e-aeec-4257-abab-cc99e8e99afa/volumes" Feb 02 14:29:09 crc kubenswrapper[4721]: I0202 14:29:09.207345 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jrptg" Feb 02 14:29:09 crc kubenswrapper[4721]: I0202 14:29:09.274123 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jrptg"] Feb 02 14:29:09 crc kubenswrapper[4721]: I0202 14:29:09.829866 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jrptg" podUID="b2305307-fc37-4522-abb5-6dc428e94e61" containerName="registry-server" containerID="cri-o://99be6af392b49413d48b4bcac1fa9431ca1a8e1ac8889b73f875ce9d9eb15be8" gracePeriod=2 Feb 02 14:29:10 crc kubenswrapper[4721]: I0202 14:29:10.348809 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jrptg" Feb 02 14:29:10 crc kubenswrapper[4721]: I0202 14:29:10.485581 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2305307-fc37-4522-abb5-6dc428e94e61-catalog-content\") pod \"b2305307-fc37-4522-abb5-6dc428e94e61\" (UID: \"b2305307-fc37-4522-abb5-6dc428e94e61\") " Feb 02 14:29:10 crc kubenswrapper[4721]: I0202 14:29:10.485740 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2305307-fc37-4522-abb5-6dc428e94e61-utilities\") pod \"b2305307-fc37-4522-abb5-6dc428e94e61\" (UID: \"b2305307-fc37-4522-abb5-6dc428e94e61\") " Feb 02 14:29:10 crc kubenswrapper[4721]: I0202 14:29:10.485849 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzcxd\" (UniqueName: \"kubernetes.io/projected/b2305307-fc37-4522-abb5-6dc428e94e61-kube-api-access-lzcxd\") pod \"b2305307-fc37-4522-abb5-6dc428e94e61\" (UID: \"b2305307-fc37-4522-abb5-6dc428e94e61\") " Feb 02 14:29:10 crc kubenswrapper[4721]: I0202 14:29:10.488124 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2305307-fc37-4522-abb5-6dc428e94e61-utilities" (OuterVolumeSpecName: "utilities") pod "b2305307-fc37-4522-abb5-6dc428e94e61" (UID: "b2305307-fc37-4522-abb5-6dc428e94e61"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:29:10 crc kubenswrapper[4721]: I0202 14:29:10.501468 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2305307-fc37-4522-abb5-6dc428e94e61-kube-api-access-lzcxd" (OuterVolumeSpecName: "kube-api-access-lzcxd") pod "b2305307-fc37-4522-abb5-6dc428e94e61" (UID: "b2305307-fc37-4522-abb5-6dc428e94e61"). InnerVolumeSpecName "kube-api-access-lzcxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:29:10 crc kubenswrapper[4721]: I0202 14:29:10.559225 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2305307-fc37-4522-abb5-6dc428e94e61-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2305307-fc37-4522-abb5-6dc428e94e61" (UID: "b2305307-fc37-4522-abb5-6dc428e94e61"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:29:10 crc kubenswrapper[4721]: I0202 14:29:10.590215 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2305307-fc37-4522-abb5-6dc428e94e61-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:29:10 crc kubenswrapper[4721]: I0202 14:29:10.590240 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2305307-fc37-4522-abb5-6dc428e94e61-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:29:10 crc kubenswrapper[4721]: I0202 14:29:10.590250 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzcxd\" (UniqueName: \"kubernetes.io/projected/b2305307-fc37-4522-abb5-6dc428e94e61-kube-api-access-lzcxd\") on node \"crc\" DevicePath \"\"" Feb 02 14:29:10 crc kubenswrapper[4721]: I0202 14:29:10.842866 4721 generic.go:334] "Generic (PLEG): container finished" podID="b2305307-fc37-4522-abb5-6dc428e94e61" containerID="99be6af392b49413d48b4bcac1fa9431ca1a8e1ac8889b73f875ce9d9eb15be8" exitCode=0 Feb 02 14:29:10 crc kubenswrapper[4721]: I0202 14:29:10.842931 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrptg" event={"ID":"b2305307-fc37-4522-abb5-6dc428e94e61","Type":"ContainerDied","Data":"99be6af392b49413d48b4bcac1fa9431ca1a8e1ac8889b73f875ce9d9eb15be8"} Feb 02 14:29:10 crc kubenswrapper[4721]: I0202 14:29:10.842968 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrptg" event={"ID":"b2305307-fc37-4522-abb5-6dc428e94e61","Type":"ContainerDied","Data":"037261c6a323f6dbd716adcfb2e228053f7afc8794d4b86219c39f4889d652e7"} Feb 02 14:29:10 crc kubenswrapper[4721]: I0202 14:29:10.842987 4721 scope.go:117] "RemoveContainer" containerID="99be6af392b49413d48b4bcac1fa9431ca1a8e1ac8889b73f875ce9d9eb15be8" Feb 02 14:29:10 crc kubenswrapper[4721]: I0202 14:29:10.843289 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jrptg" Feb 02 14:29:10 crc kubenswrapper[4721]: I0202 14:29:10.870348 4721 scope.go:117] "RemoveContainer" containerID="dd3627080e2dde3280df7f039683d58fe97f48d5c559afaab34d198530a21aa4" Feb 02 14:29:10 crc kubenswrapper[4721]: I0202 14:29:10.902276 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jrptg"] Feb 02 14:29:10 crc kubenswrapper[4721]: I0202 14:29:10.912875 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jrptg"] Feb 02 14:29:10 crc kubenswrapper[4721]: I0202 14:29:10.930611 4721 scope.go:117] "RemoveContainer" containerID="45f2eea2c429dbc41d6c0c9c84d8c1d4337067337a0e5187b2ddf2ea0476a477" Feb 02 14:29:10 crc kubenswrapper[4721]: I0202 14:29:10.971334 4721 scope.go:117] "RemoveContainer" containerID="99be6af392b49413d48b4bcac1fa9431ca1a8e1ac8889b73f875ce9d9eb15be8" Feb 02 14:29:10 crc kubenswrapper[4721]: E0202 14:29:10.972324 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99be6af392b49413d48b4bcac1fa9431ca1a8e1ac8889b73f875ce9d9eb15be8\": container with ID starting with 99be6af392b49413d48b4bcac1fa9431ca1a8e1ac8889b73f875ce9d9eb15be8 not found: ID does not exist" containerID="99be6af392b49413d48b4bcac1fa9431ca1a8e1ac8889b73f875ce9d9eb15be8" Feb 02 14:29:10 crc kubenswrapper[4721]: I0202 14:29:10.972382 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99be6af392b49413d48b4bcac1fa9431ca1a8e1ac8889b73f875ce9d9eb15be8"} err="failed to get container status \"99be6af392b49413d48b4bcac1fa9431ca1a8e1ac8889b73f875ce9d9eb15be8\": rpc error: code = NotFound desc = could not find container \"99be6af392b49413d48b4bcac1fa9431ca1a8e1ac8889b73f875ce9d9eb15be8\": container with ID starting with 99be6af392b49413d48b4bcac1fa9431ca1a8e1ac8889b73f875ce9d9eb15be8 not found: ID does not exist" Feb 02 14:29:10 crc kubenswrapper[4721]: I0202 14:29:10.972419 4721 scope.go:117] "RemoveContainer" containerID="dd3627080e2dde3280df7f039683d58fe97f48d5c559afaab34d198530a21aa4" Feb 02 14:29:10 crc kubenswrapper[4721]: E0202 14:29:10.977714 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd3627080e2dde3280df7f039683d58fe97f48d5c559afaab34d198530a21aa4\": container with ID starting with dd3627080e2dde3280df7f039683d58fe97f48d5c559afaab34d198530a21aa4 not found: ID does not exist" containerID="dd3627080e2dde3280df7f039683d58fe97f48d5c559afaab34d198530a21aa4" Feb 02 14:29:10 crc kubenswrapper[4721]: I0202 14:29:10.977771 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd3627080e2dde3280df7f039683d58fe97f48d5c559afaab34d198530a21aa4"} err="failed to get container status \"dd3627080e2dde3280df7f039683d58fe97f48d5c559afaab34d198530a21aa4\": rpc error: code = NotFound desc = could not find container \"dd3627080e2dde3280df7f039683d58fe97f48d5c559afaab34d198530a21aa4\": container with ID starting with dd3627080e2dde3280df7f039683d58fe97f48d5c559afaab34d198530a21aa4 not found: ID does not exist" Feb 02 14:29:10 crc kubenswrapper[4721]: I0202 14:29:10.977809 4721 scope.go:117] "RemoveContainer" containerID="45f2eea2c429dbc41d6c0c9c84d8c1d4337067337a0e5187b2ddf2ea0476a477" Feb 02 14:29:10 crc kubenswrapper[4721]: E0202 14:29:10.982649 4721 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"45f2eea2c429dbc41d6c0c9c84d8c1d4337067337a0e5187b2ddf2ea0476a477\": container with ID starting with 45f2eea2c429dbc41d6c0c9c84d8c1d4337067337a0e5187b2ddf2ea0476a477 not found: ID does not exist" containerID="45f2eea2c429dbc41d6c0c9c84d8c1d4337067337a0e5187b2ddf2ea0476a477" Feb 02 14:29:10 crc kubenswrapper[4721]: I0202 14:29:10.982718 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45f2eea2c429dbc41d6c0c9c84d8c1d4337067337a0e5187b2ddf2ea0476a477"} err="failed to get container status \"45f2eea2c429dbc41d6c0c9c84d8c1d4337067337a0e5187b2ddf2ea0476a477\": rpc error: code = NotFound desc = could not find container \"45f2eea2c429dbc41d6c0c9c84d8c1d4337067337a0e5187b2ddf2ea0476a477\": container with ID starting with 45f2eea2c429dbc41d6c0c9c84d8c1d4337067337a0e5187b2ddf2ea0476a477 not found: ID does not exist" Feb 02 14:29:11 crc kubenswrapper[4721]: I0202 14:29:11.512217 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gqkjm" Feb 02 14:29:11 crc kubenswrapper[4721]: I0202 14:29:11.512395 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gqkjm" Feb 02 14:29:12 crc kubenswrapper[4721]: I0202 14:29:12.432217 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2305307-fc37-4522-abb5-6dc428e94e61" path="/var/lib/kubelet/pods/b2305307-fc37-4522-abb5-6dc428e94e61/volumes" Feb 02 14:29:12 crc kubenswrapper[4721]: I0202 14:29:12.563244 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gqkjm" podUID="ef3b13ac-f3ee-4b4b-a8eb-365a926869e6" containerName="registry-server" probeResult="failure" output=< Feb 02 14:29:12 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Feb 02 14:29:12 crc kubenswrapper[4721]: > Feb 02 14:29:14 crc kubenswrapper[4721]: I0202 14:29:14.763694 4721 patch_prober.go:28] interesting pod/machine-config-daemon-rppjz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:29:14 crc kubenswrapper[4721]: I0202 14:29:14.764114 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:29:14 crc kubenswrapper[4721]: I0202 14:29:14.764167 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" Feb 02 14:29:14 crc kubenswrapper[4721]: I0202 14:29:14.765228 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"12a34d4692995e23be9650f3fe27555782fc4f8063d028a49b067d6507a1f26a"} pod="openshift-machine-config-operator/machine-config-daemon-rppjz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 14:29:14 crc kubenswrapper[4721]: I0202 14:29:14.765291 4721 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerName="machine-config-daemon" containerID="cri-o://12a34d4692995e23be9650f3fe27555782fc4f8063d028a49b067d6507a1f26a" gracePeriod=600 Feb 02 14:29:14 crc kubenswrapper[4721]: E0202 14:29:14.896822 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:29:15 crc kubenswrapper[4721]: I0202 14:29:15.350686 4721 scope.go:117] "RemoveContainer" containerID="3d42ccd527c1c49d2ecc79f70ee1abbc38a002f9404fa0303071492300851127" Feb 02 14:29:15 crc kubenswrapper[4721]: I0202 14:29:15.914738 4721 generic.go:334] "Generic (PLEG): container finished" podID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" containerID="12a34d4692995e23be9650f3fe27555782fc4f8063d028a49b067d6507a1f26a" exitCode=0 Feb 02 14:29:15 crc kubenswrapper[4721]: I0202 14:29:15.914810 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" event={"ID":"bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877","Type":"ContainerDied","Data":"12a34d4692995e23be9650f3fe27555782fc4f8063d028a49b067d6507a1f26a"} Feb 02 14:29:15 crc kubenswrapper[4721]: I0202 14:29:15.915138 4721 scope.go:117] "RemoveContainer" containerID="80a4d97025e59da4be229a82e47246c7035ccedac52df0fa5679fdc8c1c7b8ec" Feb 02 14:29:15 crc kubenswrapper[4721]: I0202 14:29:15.915994 4721 scope.go:117] "RemoveContainer" containerID="12a34d4692995e23be9650f3fe27555782fc4f8063d028a49b067d6507a1f26a" Feb 02 14:29:15 crc kubenswrapper[4721]: E0202 14:29:15.916503 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:29:22 crc kubenswrapper[4721]: I0202 14:29:22.565496 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gqkjm" podUID="ef3b13ac-f3ee-4b4b-a8eb-365a926869e6" containerName="registry-server" probeResult="failure" output=< Feb 02 14:29:22 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Feb 02 14:29:22 crc kubenswrapper[4721]: > Feb 02 14:29:29 crc kubenswrapper[4721]: I0202 14:29:29.410120 4721 scope.go:117] "RemoveContainer" containerID="12a34d4692995e23be9650f3fe27555782fc4f8063d028a49b067d6507a1f26a" Feb 02 14:29:29 crc kubenswrapper[4721]: E0202 14:29:29.410780 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:29:31 crc kubenswrapper[4721]: I0202 14:29:31.571568 4721 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gqkjm" Feb 02 14:29:31 crc kubenswrapper[4721]: I0202 14:29:31.631160 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gqkjm" Feb 02 14:29:31 crc kubenswrapper[4721]: I0202 14:29:31.819217 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gqkjm"] Feb 02 14:29:33 crc kubenswrapper[4721]: I0202 14:29:33.102556 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gqkjm" podUID="ef3b13ac-f3ee-4b4b-a8eb-365a926869e6" containerName="registry-server" containerID="cri-o://2c5f9e0314e57b27f735487f6e47e6d06478739d292a9ee40102436d47095166" gracePeriod=2 Feb 02 14:29:33 crc kubenswrapper[4721]: I0202 14:29:33.669905 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gqkjm" Feb 02 14:29:33 crc kubenswrapper[4721]: I0202 14:29:33.774130 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdrfd\" (UniqueName: \"kubernetes.io/projected/ef3b13ac-f3ee-4b4b-a8eb-365a926869e6-kube-api-access-mdrfd\") pod \"ef3b13ac-f3ee-4b4b-a8eb-365a926869e6\" (UID: \"ef3b13ac-f3ee-4b4b-a8eb-365a926869e6\") " Feb 02 14:29:33 crc kubenswrapper[4721]: I0202 14:29:33.774197 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef3b13ac-f3ee-4b4b-a8eb-365a926869e6-catalog-content\") pod \"ef3b13ac-f3ee-4b4b-a8eb-365a926869e6\" (UID: \"ef3b13ac-f3ee-4b4b-a8eb-365a926869e6\") " Feb 02 14:29:33 crc kubenswrapper[4721]: I0202 14:29:33.774400 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef3b13ac-f3ee-4b4b-a8eb-365a926869e6-utilities\") pod \"ef3b13ac-f3ee-4b4b-a8eb-365a926869e6\" (UID: \"ef3b13ac-f3ee-4b4b-a8eb-365a926869e6\") " Feb 02 14:29:33 crc kubenswrapper[4721]: I0202 14:29:33.776449 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef3b13ac-f3ee-4b4b-a8eb-365a926869e6-utilities" (OuterVolumeSpecName: "utilities") pod "ef3b13ac-f3ee-4b4b-a8eb-365a926869e6" (UID: "ef3b13ac-f3ee-4b4b-a8eb-365a926869e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:29:33 crc kubenswrapper[4721]: I0202 14:29:33.781326 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef3b13ac-f3ee-4b4b-a8eb-365a926869e6-kube-api-access-mdrfd" (OuterVolumeSpecName: "kube-api-access-mdrfd") pod "ef3b13ac-f3ee-4b4b-a8eb-365a926869e6" (UID: "ef3b13ac-f3ee-4b4b-a8eb-365a926869e6"). InnerVolumeSpecName "kube-api-access-mdrfd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:29:33 crc kubenswrapper[4721]: I0202 14:29:33.876849 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef3b13ac-f3ee-4b4b-a8eb-365a926869e6-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:29:33 crc kubenswrapper[4721]: I0202 14:29:33.877189 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdrfd\" (UniqueName: \"kubernetes.io/projected/ef3b13ac-f3ee-4b4b-a8eb-365a926869e6-kube-api-access-mdrfd\") on node \"crc\" DevicePath \"\"" Feb 02 14:29:33 crc kubenswrapper[4721]: I0202 14:29:33.898089 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef3b13ac-f3ee-4b4b-a8eb-365a926869e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef3b13ac-f3ee-4b4b-a8eb-365a926869e6" (UID: "ef3b13ac-f3ee-4b4b-a8eb-365a926869e6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:29:33 crc kubenswrapper[4721]: I0202 14:29:33.978995 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef3b13ac-f3ee-4b4b-a8eb-365a926869e6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:29:34 crc kubenswrapper[4721]: I0202 14:29:34.130776 4721 generic.go:334] "Generic (PLEG): container finished" podID="ef3b13ac-f3ee-4b4b-a8eb-365a926869e6" containerID="2c5f9e0314e57b27f735487f6e47e6d06478739d292a9ee40102436d47095166" exitCode=0 Feb 02 14:29:34 crc kubenswrapper[4721]: I0202 14:29:34.130824 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqkjm" event={"ID":"ef3b13ac-f3ee-4b4b-a8eb-365a926869e6","Type":"ContainerDied","Data":"2c5f9e0314e57b27f735487f6e47e6d06478739d292a9ee40102436d47095166"} Feb 02 14:29:34 crc kubenswrapper[4721]: I0202 14:29:34.130856 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqkjm" event={"ID":"ef3b13ac-f3ee-4b4b-a8eb-365a926869e6","Type":"ContainerDied","Data":"15ed5f7bb533c11c83fb5d5905a7a20f12acc04f97e988e956205502593356c9"} Feb 02 14:29:34 crc kubenswrapper[4721]: I0202 14:29:34.130877 4721 scope.go:117] "RemoveContainer" containerID="2c5f9e0314e57b27f735487f6e47e6d06478739d292a9ee40102436d47095166" Feb 02 14:29:34 crc kubenswrapper[4721]: I0202 14:29:34.131061 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gqkjm" Feb 02 14:29:34 crc kubenswrapper[4721]: I0202 14:29:34.159229 4721 scope.go:117] "RemoveContainer" containerID="bb504c21a6cea0a7496dbfbf4e6c97a51631619cc916f155c9b3788feb3c7df9" Feb 02 14:29:34 crc kubenswrapper[4721]: I0202 14:29:34.178458 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gqkjm"] Feb 02 14:29:34 crc kubenswrapper[4721]: I0202 14:29:34.189030 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gqkjm"] Feb 02 14:29:34 crc kubenswrapper[4721]: I0202 14:29:34.190780 4721 scope.go:117] "RemoveContainer" containerID="f1979fe9a92ae0c671cb466e891c9042ba98ed5b8fb336d632a9e9d82d0813c1" Feb 02 14:29:34 crc kubenswrapper[4721]: I0202 14:29:34.258662 4721 scope.go:117] "RemoveContainer" containerID="2c5f9e0314e57b27f735487f6e47e6d06478739d292a9ee40102436d47095166" Feb 02 14:29:34 crc kubenswrapper[4721]: E0202 14:29:34.259176 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c5f9e0314e57b27f735487f6e47e6d06478739d292a9ee40102436d47095166\": container with ID starting with 2c5f9e0314e57b27f735487f6e47e6d06478739d292a9ee40102436d47095166 not found: ID does not exist" containerID="2c5f9e0314e57b27f735487f6e47e6d06478739d292a9ee40102436d47095166" Feb 02 14:29:34 crc kubenswrapper[4721]: I0202 14:29:34.259216 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c5f9e0314e57b27f735487f6e47e6d06478739d292a9ee40102436d47095166"} err="failed to get container status \"2c5f9e0314e57b27f735487f6e47e6d06478739d292a9ee40102436d47095166\": rpc error: code = NotFound desc = could not find container \"2c5f9e0314e57b27f735487f6e47e6d06478739d292a9ee40102436d47095166\": container with ID starting with 2c5f9e0314e57b27f735487f6e47e6d06478739d292a9ee40102436d47095166 not found: ID does not exist" Feb 02 14:29:34 crc kubenswrapper[4721]: I0202 14:29:34.259249 4721 scope.go:117] "RemoveContainer" containerID="bb504c21a6cea0a7496dbfbf4e6c97a51631619cc916f155c9b3788feb3c7df9" Feb 02 14:29:34 crc kubenswrapper[4721]: E0202 14:29:34.259587 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb504c21a6cea0a7496dbfbf4e6c97a51631619cc916f155c9b3788feb3c7df9\": container with ID starting with bb504c21a6cea0a7496dbfbf4e6c97a51631619cc916f155c9b3788feb3c7df9 not found: ID does not exist" containerID="bb504c21a6cea0a7496dbfbf4e6c97a51631619cc916f155c9b3788feb3c7df9" Feb 02 14:29:34 crc kubenswrapper[4721]: I0202 14:29:34.259634 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb504c21a6cea0a7496dbfbf4e6c97a51631619cc916f155c9b3788feb3c7df9"} err="failed to get container status \"bb504c21a6cea0a7496dbfbf4e6c97a51631619cc916f155c9b3788feb3c7df9\": rpc error: code = NotFound desc = could not find container \"bb504c21a6cea0a7496dbfbf4e6c97a51631619cc916f155c9b3788feb3c7df9\": container with ID starting with bb504c21a6cea0a7496dbfbf4e6c97a51631619cc916f155c9b3788feb3c7df9 not found: ID does not exist" Feb 02 14:29:34 crc kubenswrapper[4721]: I0202 14:29:34.259662 4721 scope.go:117] "RemoveContainer" containerID="f1979fe9a92ae0c671cb466e891c9042ba98ed5b8fb336d632a9e9d82d0813c1" Feb 02 14:29:34 crc kubenswrapper[4721]: E0202 14:29:34.259887 4721 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"f1979fe9a92ae0c671cb466e891c9042ba98ed5b8fb336d632a9e9d82d0813c1\": container with ID starting with f1979fe9a92ae0c671cb466e891c9042ba98ed5b8fb336d632a9e9d82d0813c1 not found: ID does not exist" containerID="f1979fe9a92ae0c671cb466e891c9042ba98ed5b8fb336d632a9e9d82d0813c1" Feb 02 14:29:34 crc kubenswrapper[4721]: I0202 14:29:34.259908 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1979fe9a92ae0c671cb466e891c9042ba98ed5b8fb336d632a9e9d82d0813c1"} err="failed to get container status \"f1979fe9a92ae0c671cb466e891c9042ba98ed5b8fb336d632a9e9d82d0813c1\": rpc error: code = NotFound desc = could not find container \"f1979fe9a92ae0c671cb466e891c9042ba98ed5b8fb336d632a9e9d82d0813c1\": container with ID starting with f1979fe9a92ae0c671cb466e891c9042ba98ed5b8fb336d632a9e9d82d0813c1 not found: ID does not exist" Feb 02 14:29:34 crc kubenswrapper[4721]: I0202 14:29:34.428013 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef3b13ac-f3ee-4b4b-a8eb-365a926869e6" path="/var/lib/kubelet/pods/ef3b13ac-f3ee-4b4b-a8eb-365a926869e6/volumes" Feb 02 14:29:40 crc kubenswrapper[4721]: I0202 14:29:40.417104 4721 scope.go:117] "RemoveContainer" containerID="12a34d4692995e23be9650f3fe27555782fc4f8063d028a49b067d6507a1f26a" Feb 02 14:29:40 crc kubenswrapper[4721]: E0202 14:29:40.417983 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:29:49 crc kubenswrapper[4721]: I0202 14:29:49.332562 4721 generic.go:334] "Generic (PLEG): container finished" podID="56b96222-739f-41d2-996e-14e2ee91a139" containerID="ee046980151a1ce898253bd863e95503c1103f7eed2587fbfa57c95be0495a2a" exitCode=0 Feb 02 14:29:49 crc kubenswrapper[4721]: I0202 14:29:49.332638 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d2dxv/must-gather-kr4jl" event={"ID":"56b96222-739f-41d2-996e-14e2ee91a139","Type":"ContainerDied","Data":"ee046980151a1ce898253bd863e95503c1103f7eed2587fbfa57c95be0495a2a"} Feb 02 14:29:49 crc kubenswrapper[4721]: I0202 14:29:49.333989 4721 scope.go:117] "RemoveContainer" containerID="ee046980151a1ce898253bd863e95503c1103f7eed2587fbfa57c95be0495a2a" Feb 02 14:29:50 crc kubenswrapper[4721]: I0202 14:29:50.241606 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-d2dxv_must-gather-kr4jl_56b96222-739f-41d2-996e-14e2ee91a139/gather/0.log" Feb 02 14:29:54 crc kubenswrapper[4721]: I0202 14:29:54.409986 4721 scope.go:117] "RemoveContainer" containerID="12a34d4692995e23be9650f3fe27555782fc4f8063d028a49b067d6507a1f26a" Feb 02 14:29:54 crc kubenswrapper[4721]: E0202 14:29:54.410767 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:29:58 crc kubenswrapper[4721]: I0202 
14:29:58.981825 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-d2dxv/must-gather-kr4jl"] Feb 02 14:29:58 crc kubenswrapper[4721]: I0202 14:29:58.982705 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-d2dxv/must-gather-kr4jl" podUID="56b96222-739f-41d2-996e-14e2ee91a139" containerName="copy" containerID="cri-o://79dee8f469c998ce3c1779dff4af47dcba83618a5b09996aa6983b70d3d26f3f" gracePeriod=2 Feb 02 14:29:58 crc kubenswrapper[4721]: I0202 14:29:58.991918 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-d2dxv/must-gather-kr4jl"] Feb 02 14:29:59 crc kubenswrapper[4721]: I0202 14:29:59.449798 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-d2dxv_must-gather-kr4jl_56b96222-739f-41d2-996e-14e2ee91a139/copy/0.log" Feb 02 14:29:59 crc kubenswrapper[4721]: I0202 14:29:59.451573 4721 generic.go:334] "Generic (PLEG): container finished" podID="56b96222-739f-41d2-996e-14e2ee91a139" containerID="79dee8f469c998ce3c1779dff4af47dcba83618a5b09996aa6983b70d3d26f3f" exitCode=143 Feb 02 14:29:59 crc kubenswrapper[4721]: I0202 14:29:59.912537 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-d2dxv_must-gather-kr4jl_56b96222-739f-41d2-996e-14e2ee91a139/copy/0.log" Feb 02 14:29:59 crc kubenswrapper[4721]: I0202 14:29:59.913403 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d2dxv/must-gather-kr4jl" Feb 02 14:29:59 crc kubenswrapper[4721]: I0202 14:29:59.931411 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56b96222-739f-41d2-996e-14e2ee91a139-must-gather-output\") pod \"56b96222-739f-41d2-996e-14e2ee91a139\" (UID: \"56b96222-739f-41d2-996e-14e2ee91a139\") " Feb 02 14:29:59 crc kubenswrapper[4721]: I0202 14:29:59.931707 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v576\" (UniqueName: \"kubernetes.io/projected/56b96222-739f-41d2-996e-14e2ee91a139-kube-api-access-9v576\") pod \"56b96222-739f-41d2-996e-14e2ee91a139\" (UID: \"56b96222-739f-41d2-996e-14e2ee91a139\") " Feb 02 14:29:59 crc kubenswrapper[4721]: I0202 14:29:59.939410 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56b96222-739f-41d2-996e-14e2ee91a139-kube-api-access-9v576" (OuterVolumeSpecName: "kube-api-access-9v576") pod "56b96222-739f-41d2-996e-14e2ee91a139" (UID: "56b96222-739f-41d2-996e-14e2ee91a139"). InnerVolumeSpecName "kube-api-access-9v576". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.033565 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v576\" (UniqueName: \"kubernetes.io/projected/56b96222-739f-41d2-996e-14e2ee91a139-kube-api-access-9v576\") on node \"crc\" DevicePath \"\"" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.087420 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56b96222-739f-41d2-996e-14e2ee91a139-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "56b96222-739f-41d2-996e-14e2ee91a139" (UID: "56b96222-739f-41d2-996e-14e2ee91a139"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.136026 4721 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56b96222-739f-41d2-996e-14e2ee91a139-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.163812 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500710-st8kh"] Feb 02 14:30:00 crc kubenswrapper[4721]: E0202 14:30:00.164740 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef3b13ac-f3ee-4b4b-a8eb-365a926869e6" containerName="extract-content" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.164763 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef3b13ac-f3ee-4b4b-a8eb-365a926869e6" containerName="extract-content" Feb 02 14:30:00 crc kubenswrapper[4721]: E0202 14:30:00.164784 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2305307-fc37-4522-abb5-6dc428e94e61" containerName="extract-content" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.164794 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2305307-fc37-4522-abb5-6dc428e94e61" containerName="extract-content" Feb 02 14:30:00 crc kubenswrapper[4721]: E0202 14:30:00.164826 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef3b13ac-f3ee-4b4b-a8eb-365a926869e6" containerName="registry-server" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.164834 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef3b13ac-f3ee-4b4b-a8eb-365a926869e6" containerName="registry-server" Feb 02 14:30:00 crc kubenswrapper[4721]: E0202 14:30:00.164850 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa73e29e-aeec-4257-abab-cc99e8e99afa" containerName="registry-server" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.164857 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa73e29e-aeec-4257-abab-cc99e8e99afa" containerName="registry-server" Feb 02 14:30:00 crc kubenswrapper[4721]: E0202 14:30:00.164866 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2305307-fc37-4522-abb5-6dc428e94e61" containerName="extract-utilities" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.164873 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2305307-fc37-4522-abb5-6dc428e94e61" containerName="extract-utilities" Feb 02 14:30:00 crc kubenswrapper[4721]: E0202 14:30:00.164891 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56b96222-739f-41d2-996e-14e2ee91a139" containerName="gather" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.164899 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="56b96222-739f-41d2-996e-14e2ee91a139" containerName="gather" Feb 02 14:30:00 crc kubenswrapper[4721]: E0202 14:30:00.164910 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef3b13ac-f3ee-4b4b-a8eb-365a926869e6" containerName="extract-utilities" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.164918 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef3b13ac-f3ee-4b4b-a8eb-365a926869e6" containerName="extract-utilities" Feb 02 14:30:00 crc kubenswrapper[4721]: E0202 14:30:00.164936 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa73e29e-aeec-4257-abab-cc99e8e99afa" containerName="extract-utilities" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 
14:30:00.164944 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa73e29e-aeec-4257-abab-cc99e8e99afa" containerName="extract-utilities" Feb 02 14:30:00 crc kubenswrapper[4721]: E0202 14:30:00.164957 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa73e29e-aeec-4257-abab-cc99e8e99afa" containerName="extract-content" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.164963 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa73e29e-aeec-4257-abab-cc99e8e99afa" containerName="extract-content" Feb 02 14:30:00 crc kubenswrapper[4721]: E0202 14:30:00.164972 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2305307-fc37-4522-abb5-6dc428e94e61" containerName="registry-server" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.164979 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2305307-fc37-4522-abb5-6dc428e94e61" containerName="registry-server" Feb 02 14:30:00 crc kubenswrapper[4721]: E0202 14:30:00.164994 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56b96222-739f-41d2-996e-14e2ee91a139" containerName="copy" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.165001 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="56b96222-739f-41d2-996e-14e2ee91a139" containerName="copy" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.165304 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2305307-fc37-4522-abb5-6dc428e94e61" containerName="registry-server" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.165330 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="56b96222-739f-41d2-996e-14e2ee91a139" containerName="copy" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.165355 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="56b96222-739f-41d2-996e-14e2ee91a139" containerName="gather" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.165366 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa73e29e-aeec-4257-abab-cc99e8e99afa" containerName="registry-server" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.165381 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef3b13ac-f3ee-4b4b-a8eb-365a926869e6" containerName="registry-server" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.166445 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-st8kh" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.169759 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.171429 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.179016 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500710-st8kh"] Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.240480 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxl4k\" (UniqueName: \"kubernetes.io/projected/e74898b2-80a7-41b4-b44d-e6fcf2409d49-kube-api-access-zxl4k\") pod \"collect-profiles-29500710-st8kh\" (UID: \"e74898b2-80a7-41b4-b44d-e6fcf2409d49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-st8kh" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.240697 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e74898b2-80a7-41b4-b44d-e6fcf2409d49-secret-volume\") pod \"collect-profiles-29500710-st8kh\" (UID: \"e74898b2-80a7-41b4-b44d-e6fcf2409d49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-st8kh" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.241055 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e74898b2-80a7-41b4-b44d-e6fcf2409d49-config-volume\") pod \"collect-profiles-29500710-st8kh\" (UID: \"e74898b2-80a7-41b4-b44d-e6fcf2409d49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-st8kh" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.343446 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e74898b2-80a7-41b4-b44d-e6fcf2409d49-config-volume\") pod \"collect-profiles-29500710-st8kh\" (UID: \"e74898b2-80a7-41b4-b44d-e6fcf2409d49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-st8kh" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.343546 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxl4k\" (UniqueName: \"kubernetes.io/projected/e74898b2-80a7-41b4-b44d-e6fcf2409d49-kube-api-access-zxl4k\") pod \"collect-profiles-29500710-st8kh\" (UID: \"e74898b2-80a7-41b4-b44d-e6fcf2409d49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-st8kh" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.343682 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e74898b2-80a7-41b4-b44d-e6fcf2409d49-secret-volume\") pod \"collect-profiles-29500710-st8kh\" (UID: \"e74898b2-80a7-41b4-b44d-e6fcf2409d49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-st8kh" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.344497 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e74898b2-80a7-41b4-b44d-e6fcf2409d49-config-volume\") pod 
\"collect-profiles-29500710-st8kh\" (UID: \"e74898b2-80a7-41b4-b44d-e6fcf2409d49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-st8kh" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.347930 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e74898b2-80a7-41b4-b44d-e6fcf2409d49-secret-volume\") pod \"collect-profiles-29500710-st8kh\" (UID: \"e74898b2-80a7-41b4-b44d-e6fcf2409d49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-st8kh" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.363019 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxl4k\" (UniqueName: \"kubernetes.io/projected/e74898b2-80a7-41b4-b44d-e6fcf2409d49-kube-api-access-zxl4k\") pod \"collect-profiles-29500710-st8kh\" (UID: \"e74898b2-80a7-41b4-b44d-e6fcf2409d49\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-st8kh" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.432739 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56b96222-739f-41d2-996e-14e2ee91a139" path="/var/lib/kubelet/pods/56b96222-739f-41d2-996e-14e2ee91a139/volumes" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.478750 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-d2dxv_must-gather-kr4jl_56b96222-739f-41d2-996e-14e2ee91a139/copy/0.log" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.479145 4721 scope.go:117] "RemoveContainer" containerID="79dee8f469c998ce3c1779dff4af47dcba83618a5b09996aa6983b70d3d26f3f" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.479266 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d2dxv/must-gather-kr4jl" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.486690 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-st8kh" Feb 02 14:30:00 crc kubenswrapper[4721]: I0202 14:30:00.968856 4721 scope.go:117] "RemoveContainer" containerID="ee046980151a1ce898253bd863e95503c1103f7eed2587fbfa57c95be0495a2a" Feb 02 14:30:01 crc kubenswrapper[4721]: I0202 14:30:01.532710 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500710-st8kh"] Feb 02 14:30:02 crc kubenswrapper[4721]: I0202 14:30:02.506455 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-st8kh" event={"ID":"e74898b2-80a7-41b4-b44d-e6fcf2409d49","Type":"ContainerStarted","Data":"14e5b815570fe82ab155ac70919cc247b7c3fef1e80c74818febeab15ae1519b"} Feb 02 14:30:02 crc kubenswrapper[4721]: I0202 14:30:02.507154 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-st8kh" event={"ID":"e74898b2-80a7-41b4-b44d-e6fcf2409d49","Type":"ContainerStarted","Data":"4e45108cd7d7e75c5966932e75398503072a8a03e8e81731d9289ce2c1b8cdd3"} Feb 02 14:30:02 crc kubenswrapper[4721]: I0202 14:30:02.529703 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-st8kh" podStartSLOduration=2.529678958 podStartE2EDuration="2.529678958s" podCreationTimestamp="2026-02-02 14:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:30:02.520914411 +0000 UTC m=+5342.823428820" watchObservedRunningTime="2026-02-02 14:30:02.529678958 +0000 UTC m=+5342.832193357" Feb 02 14:30:03 crc kubenswrapper[4721]: I0202 14:30:03.517657 4721 generic.go:334] "Generic (PLEG): container finished" podID="e74898b2-80a7-41b4-b44d-e6fcf2409d49" containerID="14e5b815570fe82ab155ac70919cc247b7c3fef1e80c74818febeab15ae1519b" exitCode=0 Feb 02 14:30:03 crc kubenswrapper[4721]: I0202 14:30:03.517755 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-st8kh" event={"ID":"e74898b2-80a7-41b4-b44d-e6fcf2409d49","Type":"ContainerDied","Data":"14e5b815570fe82ab155ac70919cc247b7c3fef1e80c74818febeab15ae1519b"} Feb 02 14:30:04 crc kubenswrapper[4721]: I0202 14:30:04.947192 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-st8kh" Feb 02 14:30:05 crc kubenswrapper[4721]: I0202 14:30:05.004925 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxl4k\" (UniqueName: \"kubernetes.io/projected/e74898b2-80a7-41b4-b44d-e6fcf2409d49-kube-api-access-zxl4k\") pod \"e74898b2-80a7-41b4-b44d-e6fcf2409d49\" (UID: \"e74898b2-80a7-41b4-b44d-e6fcf2409d49\") " Feb 02 14:30:05 crc kubenswrapper[4721]: I0202 14:30:05.004980 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e74898b2-80a7-41b4-b44d-e6fcf2409d49-config-volume\") pod \"e74898b2-80a7-41b4-b44d-e6fcf2409d49\" (UID: \"e74898b2-80a7-41b4-b44d-e6fcf2409d49\") " Feb 02 14:30:05 crc kubenswrapper[4721]: I0202 14:30:05.005111 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e74898b2-80a7-41b4-b44d-e6fcf2409d49-secret-volume\") pod \"e74898b2-80a7-41b4-b44d-e6fcf2409d49\" (UID: \"e74898b2-80a7-41b4-b44d-e6fcf2409d49\") " Feb 02 14:30:05 crc kubenswrapper[4721]: I0202 14:30:05.006108 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e74898b2-80a7-41b4-b44d-e6fcf2409d49-config-volume" (OuterVolumeSpecName: "config-volume") pod "e74898b2-80a7-41b4-b44d-e6fcf2409d49" (UID: "e74898b2-80a7-41b4-b44d-e6fcf2409d49"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:30:05 crc kubenswrapper[4721]: I0202 14:30:05.033760 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e74898b2-80a7-41b4-b44d-e6fcf2409d49-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e74898b2-80a7-41b4-b44d-e6fcf2409d49" (UID: "e74898b2-80a7-41b4-b44d-e6fcf2409d49"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:30:05 crc kubenswrapper[4721]: I0202 14:30:05.034154 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e74898b2-80a7-41b4-b44d-e6fcf2409d49-kube-api-access-zxl4k" (OuterVolumeSpecName: "kube-api-access-zxl4k") pod "e74898b2-80a7-41b4-b44d-e6fcf2409d49" (UID: "e74898b2-80a7-41b4-b44d-e6fcf2409d49"). InnerVolumeSpecName "kube-api-access-zxl4k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:30:05 crc kubenswrapper[4721]: I0202 14:30:05.108895 4721 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e74898b2-80a7-41b4-b44d-e6fcf2409d49-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 14:30:05 crc kubenswrapper[4721]: I0202 14:30:05.108932 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxl4k\" (UniqueName: \"kubernetes.io/projected/e74898b2-80a7-41b4-b44d-e6fcf2409d49-kube-api-access-zxl4k\") on node \"crc\" DevicePath \"\"" Feb 02 14:30:05 crc kubenswrapper[4721]: I0202 14:30:05.108945 4721 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e74898b2-80a7-41b4-b44d-e6fcf2409d49-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 14:30:05 crc kubenswrapper[4721]: I0202 14:30:05.536214 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-st8kh" event={"ID":"e74898b2-80a7-41b4-b44d-e6fcf2409d49","Type":"ContainerDied","Data":"4e45108cd7d7e75c5966932e75398503072a8a03e8e81731d9289ce2c1b8cdd3"} Feb 02 14:30:05 crc kubenswrapper[4721]: I0202 14:30:05.536805 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e45108cd7d7e75c5966932e75398503072a8a03e8e81731d9289ce2c1b8cdd3" Feb 02 14:30:05 crc kubenswrapper[4721]: I0202 14:30:05.536250 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-st8kh" Feb 02 14:30:05 crc kubenswrapper[4721]: I0202 14:30:05.593636 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500665-gnnxh"] Feb 02 14:30:05 crc kubenswrapper[4721]: I0202 14:30:05.604946 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500665-gnnxh"] Feb 02 14:30:06 crc kubenswrapper[4721]: I0202 14:30:06.423297 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c1f0398-e18b-44f0-b0a8-21f2de8af4d0" path="/var/lib/kubelet/pods/0c1f0398-e18b-44f0-b0a8-21f2de8af4d0/volumes" Feb 02 14:30:07 crc kubenswrapper[4721]: I0202 14:30:07.416664 4721 scope.go:117] "RemoveContainer" containerID="12a34d4692995e23be9650f3fe27555782fc4f8063d028a49b067d6507a1f26a" Feb 02 14:30:07 crc kubenswrapper[4721]: E0202 14:30:07.417264 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:30:15 crc kubenswrapper[4721]: I0202 14:30:15.840374 4721 scope.go:117] "RemoveContainer" containerID="65b28cee5d7aba79bd5fdc801054328d6006f1cbcc896f6fdd692cc1c3bf2690" Feb 02 14:30:19 crc kubenswrapper[4721]: I0202 14:30:19.410044 4721 scope.go:117] "RemoveContainer" containerID="12a34d4692995e23be9650f3fe27555782fc4f8063d028a49b067d6507a1f26a" Feb 02 14:30:19 crc kubenswrapper[4721]: E0202 14:30:19.410853 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:30:33 crc kubenswrapper[4721]: I0202 14:30:33.132774 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qkd8k"] Feb 02 14:30:33 crc kubenswrapper[4721]: E0202 14:30:33.134311 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e74898b2-80a7-41b4-b44d-e6fcf2409d49" containerName="collect-profiles" Feb 02 14:30:33 crc kubenswrapper[4721]: I0202 14:30:33.134326 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="e74898b2-80a7-41b4-b44d-e6fcf2409d49" containerName="collect-profiles" Feb 02 14:30:33 crc kubenswrapper[4721]: I0202 14:30:33.134580 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="e74898b2-80a7-41b4-b44d-e6fcf2409d49" containerName="collect-profiles" Feb 02 14:30:33 crc kubenswrapper[4721]: I0202 14:30:33.136713 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qkd8k" Feb 02 14:30:33 crc kubenswrapper[4721]: I0202 14:30:33.148931 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkd8k"] Feb 02 14:30:33 crc kubenswrapper[4721]: I0202 14:30:33.205775 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3688a7b5-c948-4eab-8be8-2206c13a2af4-catalog-content\") pod \"redhat-marketplace-qkd8k\" (UID: \"3688a7b5-c948-4eab-8be8-2206c13a2af4\") " pod="openshift-marketplace/redhat-marketplace-qkd8k" Feb 02 14:30:33 crc kubenswrapper[4721]: I0202 14:30:33.205847 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svlz6\" (UniqueName: \"kubernetes.io/projected/3688a7b5-c948-4eab-8be8-2206c13a2af4-kube-api-access-svlz6\") pod \"redhat-marketplace-qkd8k\" (UID: \"3688a7b5-c948-4eab-8be8-2206c13a2af4\") " pod="openshift-marketplace/redhat-marketplace-qkd8k" Feb 02 14:30:33 crc kubenswrapper[4721]: I0202 14:30:33.205966 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3688a7b5-c948-4eab-8be8-2206c13a2af4-utilities\") pod \"redhat-marketplace-qkd8k\" (UID: \"3688a7b5-c948-4eab-8be8-2206c13a2af4\") " pod="openshift-marketplace/redhat-marketplace-qkd8k" Feb 02 14:30:33 crc kubenswrapper[4721]: I0202 14:30:33.319128 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3688a7b5-c948-4eab-8be8-2206c13a2af4-catalog-content\") pod \"redhat-marketplace-qkd8k\" (UID: \"3688a7b5-c948-4eab-8be8-2206c13a2af4\") " pod="openshift-marketplace/redhat-marketplace-qkd8k" Feb 02 14:30:33 crc kubenswrapper[4721]: I0202 14:30:33.319473 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svlz6\" (UniqueName: \"kubernetes.io/projected/3688a7b5-c948-4eab-8be8-2206c13a2af4-kube-api-access-svlz6\") pod \"redhat-marketplace-qkd8k\" (UID: \"3688a7b5-c948-4eab-8be8-2206c13a2af4\") " pod="openshift-marketplace/redhat-marketplace-qkd8k" Feb 02 14:30:33 crc kubenswrapper[4721]: I0202 14:30:33.319696 4721 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3688a7b5-c948-4eab-8be8-2206c13a2af4-utilities\") pod \"redhat-marketplace-qkd8k\" (UID: \"3688a7b5-c948-4eab-8be8-2206c13a2af4\") " pod="openshift-marketplace/redhat-marketplace-qkd8k" Feb 02 14:30:33 crc kubenswrapper[4721]: I0202 14:30:33.320653 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3688a7b5-c948-4eab-8be8-2206c13a2af4-utilities\") pod \"redhat-marketplace-qkd8k\" (UID: \"3688a7b5-c948-4eab-8be8-2206c13a2af4\") " pod="openshift-marketplace/redhat-marketplace-qkd8k" Feb 02 14:30:33 crc kubenswrapper[4721]: I0202 14:30:33.321549 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3688a7b5-c948-4eab-8be8-2206c13a2af4-catalog-content\") pod \"redhat-marketplace-qkd8k\" (UID: \"3688a7b5-c948-4eab-8be8-2206c13a2af4\") " pod="openshift-marketplace/redhat-marketplace-qkd8k" Feb 02 14:30:33 crc kubenswrapper[4721]: I0202 14:30:33.469716 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svlz6\" (UniqueName: \"kubernetes.io/projected/3688a7b5-c948-4eab-8be8-2206c13a2af4-kube-api-access-svlz6\") pod \"redhat-marketplace-qkd8k\" (UID: \"3688a7b5-c948-4eab-8be8-2206c13a2af4\") " pod="openshift-marketplace/redhat-marketplace-qkd8k" Feb 02 14:30:33 crc kubenswrapper[4721]: I0202 14:30:33.764305 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qkd8k" Feb 02 14:30:34 crc kubenswrapper[4721]: I0202 14:30:34.261300 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkd8k"] Feb 02 14:30:34 crc kubenswrapper[4721]: I0202 14:30:34.409244 4721 scope.go:117] "RemoveContainer" containerID="12a34d4692995e23be9650f3fe27555782fc4f8063d028a49b067d6507a1f26a" Feb 02 14:30:34 crc kubenswrapper[4721]: E0202 14:30:34.409533 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:30:34 crc kubenswrapper[4721]: I0202 14:30:34.844084 4721 generic.go:334] "Generic (PLEG): container finished" podID="3688a7b5-c948-4eab-8be8-2206c13a2af4" containerID="bdd9f70acfc914613f4ccaf5ae253dff448e5366e8d44d9c8f70f0811a2239c0" exitCode=0 Feb 02 14:30:34 crc kubenswrapper[4721]: I0202 14:30:34.844202 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkd8k" event={"ID":"3688a7b5-c948-4eab-8be8-2206c13a2af4","Type":"ContainerDied","Data":"bdd9f70acfc914613f4ccaf5ae253dff448e5366e8d44d9c8f70f0811a2239c0"} Feb 02 14:30:34 crc kubenswrapper[4721]: I0202 14:30:34.844500 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkd8k" event={"ID":"3688a7b5-c948-4eab-8be8-2206c13a2af4","Type":"ContainerStarted","Data":"c375a82bdfcb0486ffd2febfb8cf261cb67c6154240c2840964db47a65af8885"} Feb 02 14:30:36 crc kubenswrapper[4721]: I0202 14:30:36.870060 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-qkd8k" event={"ID":"3688a7b5-c948-4eab-8be8-2206c13a2af4","Type":"ContainerStarted","Data":"f84d65de81662ae82217c9ed34d813c8add0279b72245d302e8ae2471cca8b71"} Feb 02 14:30:37 crc kubenswrapper[4721]: I0202 14:30:37.884968 4721 generic.go:334] "Generic (PLEG): container finished" podID="3688a7b5-c948-4eab-8be8-2206c13a2af4" containerID="f84d65de81662ae82217c9ed34d813c8add0279b72245d302e8ae2471cca8b71" exitCode=0 Feb 02 14:30:37 crc kubenswrapper[4721]: I0202 14:30:37.885095 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkd8k" event={"ID":"3688a7b5-c948-4eab-8be8-2206c13a2af4","Type":"ContainerDied","Data":"f84d65de81662ae82217c9ed34d813c8add0279b72245d302e8ae2471cca8b71"} Feb 02 14:30:38 crc kubenswrapper[4721]: I0202 14:30:38.898145 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkd8k" event={"ID":"3688a7b5-c948-4eab-8be8-2206c13a2af4","Type":"ContainerStarted","Data":"e4f07d6b243c486bb07b9ba79db513a67c0f430b043761f4c17f0a1c063e001d"} Feb 02 14:30:38 crc kubenswrapper[4721]: I0202 14:30:38.928685 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qkd8k" podStartSLOduration=2.475175078 podStartE2EDuration="5.928659032s" podCreationTimestamp="2026-02-02 14:30:33 +0000 UTC" firstStartedPulling="2026-02-02 14:30:34.847506863 +0000 UTC m=+5375.150021272" lastFinishedPulling="2026-02-02 14:30:38.300990837 +0000 UTC m=+5378.603505226" observedRunningTime="2026-02-02 14:30:38.919148405 +0000 UTC m=+5379.221662814" watchObservedRunningTime="2026-02-02 14:30:38.928659032 +0000 UTC m=+5379.231173441" Feb 02 14:30:43 crc kubenswrapper[4721]: I0202 14:30:43.764767 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qkd8k" Feb 02 14:30:43 crc kubenswrapper[4721]: I0202 14:30:43.765456 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qkd8k" Feb 02 14:30:43 crc kubenswrapper[4721]: I0202 14:30:43.812712 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qkd8k" Feb 02 14:30:43 crc kubenswrapper[4721]: I0202 14:30:43.992436 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qkd8k" Feb 02 14:30:44 crc kubenswrapper[4721]: I0202 14:30:44.060934 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkd8k"] Feb 02 14:30:45 crc kubenswrapper[4721]: I0202 14:30:45.965346 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qkd8k" podUID="3688a7b5-c948-4eab-8be8-2206c13a2af4" containerName="registry-server" containerID="cri-o://e4f07d6b243c486bb07b9ba79db513a67c0f430b043761f4c17f0a1c063e001d" gracePeriod=2 Feb 02 14:30:47 crc kubenswrapper[4721]: I0202 14:30:47.004228 4721 generic.go:334] "Generic (PLEG): container finished" podID="3688a7b5-c948-4eab-8be8-2206c13a2af4" containerID="e4f07d6b243c486bb07b9ba79db513a67c0f430b043761f4c17f0a1c063e001d" exitCode=0 Feb 02 14:30:47 crc kubenswrapper[4721]: I0202 14:30:47.004328 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkd8k" 
event={"ID":"3688a7b5-c948-4eab-8be8-2206c13a2af4","Type":"ContainerDied","Data":"e4f07d6b243c486bb07b9ba79db513a67c0f430b043761f4c17f0a1c063e001d"} Feb 02 14:30:47 crc kubenswrapper[4721]: I0202 14:30:47.418713 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qkd8k" Feb 02 14:30:47 crc kubenswrapper[4721]: I0202 14:30:47.573478 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3688a7b5-c948-4eab-8be8-2206c13a2af4-utilities\") pod \"3688a7b5-c948-4eab-8be8-2206c13a2af4\" (UID: \"3688a7b5-c948-4eab-8be8-2206c13a2af4\") " Feb 02 14:30:47 crc kubenswrapper[4721]: I0202 14:30:47.573838 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3688a7b5-c948-4eab-8be8-2206c13a2af4-catalog-content\") pod \"3688a7b5-c948-4eab-8be8-2206c13a2af4\" (UID: \"3688a7b5-c948-4eab-8be8-2206c13a2af4\") " Feb 02 14:30:47 crc kubenswrapper[4721]: I0202 14:30:47.574148 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svlz6\" (UniqueName: \"kubernetes.io/projected/3688a7b5-c948-4eab-8be8-2206c13a2af4-kube-api-access-svlz6\") pod \"3688a7b5-c948-4eab-8be8-2206c13a2af4\" (UID: \"3688a7b5-c948-4eab-8be8-2206c13a2af4\") " Feb 02 14:30:47 crc kubenswrapper[4721]: I0202 14:30:47.574726 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3688a7b5-c948-4eab-8be8-2206c13a2af4-utilities" (OuterVolumeSpecName: "utilities") pod "3688a7b5-c948-4eab-8be8-2206c13a2af4" (UID: "3688a7b5-c948-4eab-8be8-2206c13a2af4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:30:47 crc kubenswrapper[4721]: I0202 14:30:47.575371 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3688a7b5-c948-4eab-8be8-2206c13a2af4-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:30:47 crc kubenswrapper[4721]: I0202 14:30:47.580628 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3688a7b5-c948-4eab-8be8-2206c13a2af4-kube-api-access-svlz6" (OuterVolumeSpecName: "kube-api-access-svlz6") pod "3688a7b5-c948-4eab-8be8-2206c13a2af4" (UID: "3688a7b5-c948-4eab-8be8-2206c13a2af4"). InnerVolumeSpecName "kube-api-access-svlz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:30:47 crc kubenswrapper[4721]: I0202 14:30:47.596053 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3688a7b5-c948-4eab-8be8-2206c13a2af4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3688a7b5-c948-4eab-8be8-2206c13a2af4" (UID: "3688a7b5-c948-4eab-8be8-2206c13a2af4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:30:47 crc kubenswrapper[4721]: I0202 14:30:47.676847 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svlz6\" (UniqueName: \"kubernetes.io/projected/3688a7b5-c948-4eab-8be8-2206c13a2af4-kube-api-access-svlz6\") on node \"crc\" DevicePath \"\"" Feb 02 14:30:47 crc kubenswrapper[4721]: I0202 14:30:47.676893 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3688a7b5-c948-4eab-8be8-2206c13a2af4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:30:48 crc kubenswrapper[4721]: I0202 14:30:48.015826 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkd8k" event={"ID":"3688a7b5-c948-4eab-8be8-2206c13a2af4","Type":"ContainerDied","Data":"c375a82bdfcb0486ffd2febfb8cf261cb67c6154240c2840964db47a65af8885"} Feb 02 14:30:48 crc kubenswrapper[4721]: I0202 14:30:48.015882 4721 scope.go:117] "RemoveContainer" containerID="e4f07d6b243c486bb07b9ba79db513a67c0f430b043761f4c17f0a1c063e001d" Feb 02 14:30:48 crc kubenswrapper[4721]: I0202 14:30:48.015954 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qkd8k" Feb 02 14:30:48 crc kubenswrapper[4721]: I0202 14:30:48.037812 4721 scope.go:117] "RemoveContainer" containerID="f84d65de81662ae82217c9ed34d813c8add0279b72245d302e8ae2471cca8b71" Feb 02 14:30:48 crc kubenswrapper[4721]: I0202 14:30:48.061711 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkd8k"] Feb 02 14:30:48 crc kubenswrapper[4721]: I0202 14:30:48.068227 4721 scope.go:117] "RemoveContainer" containerID="bdd9f70acfc914613f4ccaf5ae253dff448e5366e8d44d9c8f70f0811a2239c0" Feb 02 14:30:48 crc kubenswrapper[4721]: I0202 14:30:48.084172 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkd8k"] Feb 02 14:30:48 crc kubenswrapper[4721]: I0202 14:30:48.410189 4721 scope.go:117] "RemoveContainer" containerID="12a34d4692995e23be9650f3fe27555782fc4f8063d028a49b067d6507a1f26a" Feb 02 14:30:48 crc kubenswrapper[4721]: E0202 14:30:48.410803 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:30:48 crc kubenswrapper[4721]: I0202 14:30:48.425442 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3688a7b5-c948-4eab-8be8-2206c13a2af4" path="/var/lib/kubelet/pods/3688a7b5-c948-4eab-8be8-2206c13a2af4/volumes" Feb 02 14:31:03 crc kubenswrapper[4721]: I0202 14:31:03.410223 4721 scope.go:117] "RemoveContainer" containerID="12a34d4692995e23be9650f3fe27555782fc4f8063d028a49b067d6507a1f26a" Feb 02 14:31:03 crc kubenswrapper[4721]: E0202 14:31:03.411284 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" 
podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:31:14 crc kubenswrapper[4721]: I0202 14:31:14.432562 4721 scope.go:117] "RemoveContainer" containerID="12a34d4692995e23be9650f3fe27555782fc4f8063d028a49b067d6507a1f26a" Feb 02 14:31:14 crc kubenswrapper[4721]: E0202 14:31:14.433382 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:31:28 crc kubenswrapper[4721]: I0202 14:31:28.410332 4721 scope.go:117] "RemoveContainer" containerID="12a34d4692995e23be9650f3fe27555782fc4f8063d028a49b067d6507a1f26a" Feb 02 14:31:28 crc kubenswrapper[4721]: E0202 14:31:28.413404 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:31:42 crc kubenswrapper[4721]: I0202 14:31:42.410554 4721 scope.go:117] "RemoveContainer" containerID="12a34d4692995e23be9650f3fe27555782fc4f8063d028a49b067d6507a1f26a" Feb 02 14:31:42 crc kubenswrapper[4721]: E0202 14:31:42.411369 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:31:55 crc kubenswrapper[4721]: I0202 14:31:55.409954 4721 scope.go:117] "RemoveContainer" containerID="12a34d4692995e23be9650f3fe27555782fc4f8063d028a49b067d6507a1f26a" Feb 02 14:31:55 crc kubenswrapper[4721]: E0202 14:31:55.410650 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:32:09 crc kubenswrapper[4721]: I0202 14:32:09.410325 4721 scope.go:117] "RemoveContainer" containerID="12a34d4692995e23be9650f3fe27555782fc4f8063d028a49b067d6507a1f26a" Feb 02 14:32:09 crc kubenswrapper[4721]: E0202 14:32:09.411539 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:32:22 crc kubenswrapper[4721]: I0202 14:32:22.409451 4721 scope.go:117] "RemoveContainer" 
containerID="12a34d4692995e23be9650f3fe27555782fc4f8063d028a49b067d6507a1f26a" Feb 02 14:32:22 crc kubenswrapper[4721]: E0202 14:32:22.412114 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:32:34 crc kubenswrapper[4721]: I0202 14:32:34.409621 4721 scope.go:117] "RemoveContainer" containerID="12a34d4692995e23be9650f3fe27555782fc4f8063d028a49b067d6507a1f26a" Feb 02 14:32:34 crc kubenswrapper[4721]: E0202 14:32:34.410419 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:32:47 crc kubenswrapper[4721]: I0202 14:32:47.410922 4721 scope.go:117] "RemoveContainer" containerID="12a34d4692995e23be9650f3fe27555782fc4f8063d028a49b067d6507a1f26a" Feb 02 14:32:47 crc kubenswrapper[4721]: E0202 14:32:47.412391 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:32:59 crc kubenswrapper[4721]: I0202 14:32:59.410517 4721 scope.go:117] "RemoveContainer" containerID="12a34d4692995e23be9650f3fe27555782fc4f8063d028a49b067d6507a1f26a" Feb 02 14:32:59 crc kubenswrapper[4721]: E0202 14:32:59.412387 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:33:12 crc kubenswrapper[4721]: I0202 14:33:12.410028 4721 scope.go:117] "RemoveContainer" containerID="12a34d4692995e23be9650f3fe27555782fc4f8063d028a49b067d6507a1f26a" Feb 02 14:33:12 crc kubenswrapper[4721]: E0202 14:33:12.410843 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:33:25 crc kubenswrapper[4721]: I0202 14:33:25.410053 4721 scope.go:117] "RemoveContainer" containerID="12a34d4692995e23be9650f3fe27555782fc4f8063d028a49b067d6507a1f26a" Feb 02 14:33:25 crc kubenswrapper[4721]: E0202 14:33:25.411015 4721 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:33:38 crc kubenswrapper[4721]: I0202 14:33:38.411595 4721 scope.go:117] "RemoveContainer" containerID="12a34d4692995e23be9650f3fe27555782fc4f8063d028a49b067d6507a1f26a" Feb 02 14:33:38 crc kubenswrapper[4721]: E0202 14:33:38.412318 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877" Feb 02 14:33:50 crc kubenswrapper[4721]: I0202 14:33:50.417246 4721 scope.go:117] "RemoveContainer" containerID="12a34d4692995e23be9650f3fe27555782fc4f8063d028a49b067d6507a1f26a" Feb 02 14:33:50 crc kubenswrapper[4721]: E0202 14:33:50.418057 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rppjz_openshift-machine-config-operator(bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877)\"" pod="openshift-machine-config-operator/machine-config-daemon-rppjz" podUID="bc8c3bf4-0f02-47a1-b8b8-1e40a8daa877"